00:00:00.001 Started by upstream project "autotest-per-patch" build number 126169
00:00:00.001 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.103 The recommended git tool is: git
00:00:00.103 using credential 00000000-0000-0000-0000-000000000002
00:00:00.106 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.171 Fetching changes from the remote Git repository
00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.229 Using shallow fetch with depth 1
00:00:00.229 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.229 > git --version # timeout=10
00:00:00.280 > git --version # 'git version 2.39.2'
00:00:00.280 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.318 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.318 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.202 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.216 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.228 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.229 > git config core.sparsecheckout # timeout=10
00:00:06.240 > git read-tree -mu HEAD # timeout=10
00:00:06.257 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.284 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.284 > git rev-list --no-walk b0ebb039b16703d64cc7534b6e0fa0780ed1e683 # timeout=10
00:00:06.379 [Pipeline] Start of Pipeline
00:00:06.397 [Pipeline] library
00:00:06.399 Loading library shm_lib@master
00:00:06.400 Library shm_lib@master is cached. Copying from home.
00:00:06.417 [Pipeline] node
00:00:06.434 Running on WFP22 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.437 [Pipeline] {
00:00:06.445 [Pipeline] catchError
00:00:06.446 [Pipeline] {
00:00:06.457 [Pipeline] wrap
00:00:06.465 [Pipeline] {
00:00:06.472 [Pipeline] stage
00:00:06.473 [Pipeline] { (Prologue)
00:00:06.713 [Pipeline] sh
00:00:06.997 + logger -p user.info -t JENKINS-CI
00:00:07.020 [Pipeline] echo
00:00:07.021 Node: WFP22
00:00:07.030 [Pipeline] sh
00:00:07.325 [Pipeline] setCustomBuildProperty
00:00:07.338 [Pipeline] echo
00:00:07.339 Cleanup processes
00:00:07.345 [Pipeline] sh
00:00:07.625 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.625 1663461 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.638 [Pipeline] sh
00:00:07.921 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.921 ++ grep -v 'sudo pgrep'
00:00:07.921 ++ awk '{print $1}'
00:00:07.921 + sudo kill -9
00:00:07.921 + true
00:00:07.936 [Pipeline] cleanWs
00:00:07.945 [WS-CLEANUP] Deleting project workspace...
00:00:07.945 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.952 [WS-CLEANUP] done
00:00:07.956 [Pipeline] setCustomBuildProperty
00:00:07.970 [Pipeline] sh
00:00:08.246 + sudo git config --global --replace-all safe.directory '*'
00:00:08.317 [Pipeline] httpRequest
00:00:08.362 [Pipeline] echo
00:00:08.364 Sorcerer 10.211.164.101 is alive
00:00:08.435 [Pipeline] httpRequest
00:00:08.440 HttpMethod: GET
00:00:08.441 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.441 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:08.453 Response Code: HTTP/1.1 200 OK
00:00:08.453 Success: Status code 200 is in the accepted range: 200,404
00:00:08.453 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:13.103 [Pipeline] sh
00:00:13.386 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:13.406 [Pipeline] httpRequest
00:00:13.426 [Pipeline] echo
00:00:13.428 Sorcerer 10.211.164.101 is alive
00:00:13.438 [Pipeline] httpRequest
00:00:13.461 HttpMethod: GET
00:00:13.462 URL: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:00:13.462 Sending request to url: http://10.211.164.101/packages/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:00:13.463 Response Code: HTTP/1.1 200 OK
00:00:13.463 Success: Status code 200 is in the accepted range: 200,404
00:00:13.464 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:01:15.013 [Pipeline] sh
00:01:15.308 + tar --no-same-owner -xf spdk_62a72093c08fd8c16f60a79961fc65ceca1d8765.tar.gz
00:01:17.936 [Pipeline] sh
00:01:18.221 + git -C spdk log --oneline -n5
00:01:18.221 62a72093c bdev: Add bdev_enable_histogram filter
00:01:18.221 719d03c6a sock/uring: only register net impl if supported
00:01:18.221 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:18.221 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:18.221 6c7c1f57e accel: add sequence outstanding stat
00:01:18.234 [Pipeline] }
00:01:18.255 [Pipeline] // stage
00:01:18.266 [Pipeline] stage
00:01:18.268 [Pipeline] { (Prepare)
00:01:18.289 [Pipeline] writeFile
00:01:18.311 [Pipeline] sh
00:01:18.596 + logger -p user.info -t JENKINS-CI
00:01:18.611 [Pipeline] sh
00:01:18.895 + logger -p user.info -t JENKINS-CI
00:01:18.909 [Pipeline] sh
00:01:19.193 + cat autorun-spdk.conf
00:01:19.193 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.193 SPDK_TEST_NVMF=1
00:01:19.193 SPDK_TEST_NVME_CLI=1
00:01:19.193 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.193 SPDK_TEST_NVMF_NICS=e810
00:01:19.193 SPDK_TEST_VFIOUSER=1
00:01:19.193 SPDK_RUN_UBSAN=1
00:01:19.193 NET_TYPE=phy
00:01:19.200 RUN_NIGHTLY=0
00:01:19.206 [Pipeline] readFile
00:01:19.234 [Pipeline] withEnv
00:01:19.236 [Pipeline] {
00:01:19.251 [Pipeline] sh
00:01:19.535 + set -ex
00:01:19.536 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:19.536 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:19.536 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:19.536 ++ SPDK_TEST_NVMF=1
00:01:19.536 ++ SPDK_TEST_NVME_CLI=1
00:01:19.536 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:19.536 ++ SPDK_TEST_NVMF_NICS=e810
00:01:19.536 ++ SPDK_TEST_VFIOUSER=1
00:01:19.536 ++ SPDK_RUN_UBSAN=1
00:01:19.536 ++ NET_TYPE=phy
00:01:19.536 ++ RUN_NIGHTLY=0
00:01:19.536 + case $SPDK_TEST_NVMF_NICS in
00:01:19.536 + DRIVERS=ice
00:01:19.536 + [[ tcp == \r\d\m\a ]]
00:01:19.536 + [[ -n ice ]]
00:01:19.536 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:19.536 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:19.536 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:19.536 rmmod: ERROR: Module irdma is not currently loaded
00:01:19.536 rmmod: ERROR: Module i40iw is not currently loaded
00:01:19.536 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:19.536 + true
00:01:19.536 + for D in $DRIVERS
00:01:19.536 + sudo modprobe ice
00:01:19.536 + exit 0
00:01:19.545 [Pipeline] }
00:01:19.567 [Pipeline] // withEnv
00:01:19.573 [Pipeline] }
00:01:19.593 [Pipeline] // stage
00:01:19.603 [Pipeline] catchError
00:01:19.605 [Pipeline] {
00:01:19.622 [Pipeline] timeout
00:01:19.622 Timeout set to expire in 50 min
00:01:19.624 [Pipeline] {
00:01:19.642 [Pipeline] stage
00:01:19.645 [Pipeline] { (Tests)
00:01:19.664 [Pipeline] sh
00:01:19.948 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.948 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.948 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.948 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:19.948 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:19.948 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:19.948 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:19.949 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:19.949 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:19.949 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:19.949 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:19.949 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:19.949 + source /etc/os-release
00:01:19.949 ++ NAME='Fedora Linux'
00:01:19.949 ++ VERSION='38 (Cloud Edition)'
00:01:19.949 ++ ID=fedora
00:01:19.949 ++ VERSION_ID=38
00:01:19.949 ++ VERSION_CODENAME=
00:01:19.949 ++ PLATFORM_ID=platform:f38
00:01:19.949 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:19.949 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:19.949 ++ LOGO=fedora-logo-icon
00:01:19.949 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:19.949 ++ HOME_URL=https://fedoraproject.org/
00:01:19.949 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:19.949 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:19.949 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:19.949 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:19.949 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:19.949 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:19.949 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:19.949 ++ SUPPORT_END=2024-05-14
00:01:19.949 ++ VARIANT='Cloud Edition'
00:01:19.949 ++ VARIANT_ID=cloud
00:01:19.949 + uname -a
00:01:19.949 Linux spdk-wfp-22 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:19.949 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:23.244 Hugepages
00:01:23.244 node hugesize free / total
00:01:23.244 node0 1048576kB 0 / 0
00:01:23.244 node0 2048kB 0 / 0
00:01:23.244 node1 1048576kB 0 / 0
00:01:23.244 node1 2048kB 0 / 0
00:01:23.244
00:01:23.244 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:23.244 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:23.244 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:01:23.244 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:23.244 + rm -f /tmp/spdk-ld-path
00:01:23.244 + source autorun-spdk.conf
00:01:23.244 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.244 ++ SPDK_TEST_NVMF=1
00:01:23.244 ++ SPDK_TEST_NVME_CLI=1
00:01:23.244 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.244 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.244 ++ SPDK_TEST_VFIOUSER=1
00:01:23.244 ++ SPDK_RUN_UBSAN=1
00:01:23.244 ++ NET_TYPE=phy
00:01:23.244 ++ RUN_NIGHTLY=0
00:01:23.244 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:23.244 + [[ -n '' ]]
00:01:23.244 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:23.244 + for M in /var/spdk/build-*-manifest.txt
00:01:23.244 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:23.244 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.244 + for M in /var/spdk/build-*-manifest.txt
00:01:23.244 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:23.244 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:23.244 ++ uname
00:01:23.244 + [[ Linux == \L\i\n\u\x ]]
00:01:23.244 + sudo dmesg -T
00:01:23.244 + sudo dmesg --clear
00:01:23.244 + dmesg_pid=1664904
00:01:23.244 + sudo dmesg -Tw
00:01:23.244 + [[ Fedora Linux == FreeBSD ]]
00:01:23.244 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.244 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:23.244 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:23.244 + [[ -x /usr/src/fio-static/fio ]]
00:01:23.244 + export FIO_BIN=/usr/src/fio-static/fio
00:01:23.244 + FIO_BIN=/usr/src/fio-static/fio
00:01:23.244 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:23.244 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:23.244 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:23.244 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.244 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:23.244 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:23.244 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.244 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:23.244 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.244 Test configuration:
00:01:23.244 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.244 SPDK_TEST_NVMF=1
00:01:23.244 SPDK_TEST_NVME_CLI=1
00:01:23.244 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.244 SPDK_TEST_NVMF_NICS=e810
00:01:23.244 SPDK_TEST_VFIOUSER=1
00:01:23.244 SPDK_RUN_UBSAN=1
00:01:23.244 NET_TYPE=phy
00:01:23.244 RUN_NIGHTLY=0
11:27:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:23.244 11:27:51 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:23.244 11:27:51 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:23.244 11:27:51 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:23.245 11:27:51 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.245 11:27:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.245 11:27:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.245 11:27:51 -- paths/export.sh@5 -- $ export PATH
00:01:23.245 11:27:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:23.245 11:27:51 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:23.245 11:27:51 -- common/autobuild_common.sh@444 -- $ date +%s
00:01:23.245 11:27:51 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721035671.XXXXXX
00:01:23.245 11:27:51 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721035671.zz3PCV
00:01:23.245 11:27:51 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:01:23.245 11:27:51 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:01:23.245 11:27:51 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:23.245 11:27:51 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:23.245 11:27:51 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:23.245 11:27:51 -- common/autobuild_common.sh@460 -- $ get_config_params
00:01:23.245 11:27:51 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:01:23.245 11:27:51 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.245 11:27:51 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:23.245 11:27:51 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:01:23.245 11:27:51 -- pm/common@17 -- $ local monitor
00:01:23.245 11:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.245 11:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.245 11:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.245 11:27:51 -- pm/common@21 -- $ date +%s
00:01:23.245 11:27:51 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:23.245 11:27:51 -- pm/common@21 -- $ date +%s
00:01:23.245 11:27:51 -- pm/common@21 -- $ date +%s
00:01:23.245 11:27:51 -- pm/common@25 -- $ sleep 1
00:01:23.245 11:27:51 -- pm/common@21 -- $ date +%s
00:01:23.245 11:27:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035671
00:01:23.245 11:27:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035671
00:01:23.245 11:27:51 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035671
00:01:23.245 11:27:51 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721035671
00:01:23.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035671_collect-cpu-temp.pm.log
00:01:23.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035671_collect-vmstat.pm.log
00:01:23.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035671_collect-cpu-load.pm.log
00:01:23.245 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721035671_collect-bmc-pm.bmc.pm.log
00:01:24.183 11:27:52 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:24.183 11:27:52 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
11:27:52 -- spdk/autobuild.sh@12 -- $ umask 022
11:27:52 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
11:27:52 -- spdk/autobuild.sh@16 -- $ date -u
00:01:24.183 Mon Jul 15 09:27:52 AM UTC 2024
00:01:24.183 11:27:52 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:24.183 v24.09-pre-203-g62a72093c
00:01:24.183 11:27:52 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:24.183 11:27:52 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:24.183 11:27:52 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:24.183 11:27:52 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:24.183 11:27:52 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:24.183 11:27:52 -- common/autotest_common.sh@10 -- $ set +x
00:01:24.183 ************************************
00:01:24.183 START TEST ubsan
00:01:24.183 ************************************
00:01:24.183 11:27:52 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:24.183 using ubsan
00:01:24.183
00:01:24.183 real 0m0.000s
00:01:24.183 user 0m0.000s
00:01:24.183 sys 0m0.000s
00:01:24.183 11:27:52 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:24.183 11:27:52 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:24.183 ************************************
00:01:24.183 END TEST ubsan
00:01:24.183 ************************************
00:01:24.442 11:27:52 -- common/autotest_common.sh@1142 -- $ return 0
00:01:24.442 11:27:52 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:24.442 11:27:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:24.442 11:27:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:24.442 11:27:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:24.442 11:27:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:24.442 11:27:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:24.442 11:27:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:24.442 11:27:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:24.443 11:27:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:24.443 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:24.443 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:24.702 Using 'verbs' RDMA provider
00:01:40.557 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:52.775 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:52.775 Creating mk/config.mk...done.
00:01:52.775 Creating mk/cc.flags.mk...done.
00:01:52.775 Type 'make' to build.
00:01:52.775 11:28:19 -- spdk/autobuild.sh@69 -- $ run_test make make -j112
00:01:52.775 11:28:19 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:52.775 11:28:19 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:52.775 11:28:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:52.775 ************************************
00:01:52.775 START TEST make
00:01:52.775 ************************************
00:01:52.775 11:28:19 make -- common/autotest_common.sh@1123 -- $ make -j112
00:01:52.775 make[1]: Nothing to be done for 'all'.
00:01:53.343 The Meson build system
00:01:53.343 Version: 1.3.1
00:01:53.343 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:53.343 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:53.343 Build type: native build
00:01:53.343 Project name: libvfio-user
00:01:53.343 Project version: 0.0.1
00:01:53.343 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:53.343 C linker for the host machine: cc ld.bfd 2.39-16
00:01:53.343 Host machine cpu family: x86_64
00:01:53.343 Host machine cpu: x86_64
00:01:53.343 Run-time dependency threads found: YES
00:01:53.343 Library dl found: YES
00:01:53.343 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:53.343 Run-time dependency json-c found: YES 0.17
00:01:53.343 Run-time dependency cmocka found: YES 1.1.7
00:01:53.343 Program pytest-3 found: NO
00:01:53.343 Program flake8 found: NO
00:01:53.343 Program misspell-fixer found: NO
00:01:53.343 Program restructuredtext-lint found: NO
00:01:53.343 Program valgrind found: YES (/usr/bin/valgrind)
00:01:53.343 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:53.343 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:53.343 Compiler for C supports arguments -Wwrite-strings: YES
00:01:53.343 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:53.343 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:53.343 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:53.343 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:53.343 Build targets in project: 8
00:01:53.343 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:53.343 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:53.343
00:01:53.343 libvfio-user 0.0.1
00:01:53.343
00:01:53.343 User defined options
00:01:53.343 buildtype : debug
00:01:53.343 default_library: shared
00:01:53.343 libdir : /usr/local/lib
00:01:53.343
00:01:53.343 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:53.600 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:53.859 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:53.859 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:53.859 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:53.859 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:53.859 [5/37] Compiling C object samples/null.p/null.c.o
00:01:53.859 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:53.859 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:53.859 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:53.859 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:53.859 [10/37] Compiling C object samples/server.p/server.c.o
00:01:53.859 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:53.859 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:53.859 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:53.859 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:53.859 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:53.859 [16/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:53.859 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:53.859 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:53.859 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:53.859 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:53.859 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:53.859 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:53.859 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:53.859 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:53.859 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:53.859 [26/37] Compiling C object samples/client.p/client.c.o
00:01:53.859 [27/37] Linking target samples/client
00:01:53.859 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:53.859 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:53.859 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:54.117 [31/37] Linking target test/unit_tests
00:01:54.117 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:54.117 [33/37] Linking target samples/shadow_ioeventfd_server
00:01:54.117 [34/37] Linking target samples/lspci
00:01:54.117 [35/37] Linking target samples/null
00:01:54.117 [36/37] Linking target samples/server
00:01:54.117 [37/37] Linking target samples/gpio-pci-idio-16
00:01:54.117 INFO: autodetecting backend as ninja
00:01:54.117 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:54.117 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:54.375 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:54.375 ninja: no work to do.
00:01:59.645 The Meson build system
00:01:59.645 Version: 1.3.1
00:01:59.645 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:59.645 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:59.645 Build type: native build
00:01:59.645 Program cat found: YES (/usr/bin/cat)
00:01:59.645 Project name: DPDK
00:01:59.645 Project version: 24.03.0
00:01:59.645 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:59.645 C linker for the host machine: cc ld.bfd 2.39-16
00:01:59.645 Host machine cpu family: x86_64
00:01:59.645 Host machine cpu: x86_64
00:01:59.645 Message: ## Building in Developer Mode ##
00:01:59.645 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:59.645 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:59.645 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:59.645 Program python3 found: YES (/usr/bin/python3)
00:01:59.645 Program cat found: YES (/usr/bin/cat)
00:01:59.645 Compiler for C supports arguments -march=native: YES
00:01:59.645 Checking for size of "void *" : 8
00:01:59.645 Checking for size of "void *" : 8 (cached)
00:01:59.645 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:59.645 Library m found: YES
00:01:59.645 Library numa found: YES
00:01:59.645 Has header "numaif.h" : YES
00:01:59.645 Library fdt found: NO
00:01:59.645 Library execinfo found: NO
00:01:59.645 Has header "execinfo.h" : YES
00:01:59.645 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:59.645 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:59.645 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:59.645 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:59.645 Run-time dependency openssl found: YES 3.0.9
00:01:59.645 Run-time dependency libpcap found: YES 1.10.4
00:01:59.645 Has header "pcap.h" with dependency libpcap: YES
00:01:59.645 Compiler for C supports arguments -Wcast-qual: YES
00:01:59.645 Compiler for C supports arguments -Wdeprecated: YES
00:01:59.645 Compiler for C supports arguments -Wformat: YES
00:01:59.645 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:59.645 Compiler for C supports arguments -Wformat-security: NO
00:01:59.645 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:59.645 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:59.645 Compiler for C supports arguments -Wnested-externs: YES
00:01:59.645 Compiler for C supports arguments -Wold-style-definition: YES
00:01:59.645 Compiler for C supports arguments -Wpointer-arith: YES
00:01:59.645 Compiler for C supports arguments -Wsign-compare: YES
00:01:59.645 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:59.645 Compiler for C supports arguments -Wundef: YES
00:01:59.645 Compiler for C supports arguments -Wwrite-strings: YES
00:01:59.645 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:59.645 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:59.645 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:59.645 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:59.645 Program objdump found: YES (/usr/bin/objdump)
00:01:59.645 Compiler for C supports arguments -mavx512f: YES
00:01:59.645 Checking if "AVX512 checking" compiles: YES
00:01:59.645 Fetching value of define "__SSE4_2__" : 1
00:01:59.645 Fetching value of define "__AES__" : 1
00:01:59.645 Fetching value of define "__AVX__" : 1
00:01:59.645 Fetching value of define "__AVX2__" : 1
00:01:59.645 Fetching value of define "__AVX512BW__" : 1
00:01:59.645 Fetching value of define "__AVX512CD__" : 1
00:01:59.645 Fetching value of define "__AVX512DQ__" : 1
00:01:59.645 Fetching value of define "__AVX512F__" : 1
00:01:59.645 Fetching value of define "__AVX512VL__" : 1
00:01:59.645 Fetching value of define "__PCLMUL__" : 1
00:01:59.645 Fetching value of define "__RDRND__" : 1
00:01:59.645 Fetching value of define "__RDSEED__" : 1
00:01:59.645 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:59.645 Fetching value of define "__znver1__" : (undefined)
00:01:59.645 Fetching value of define "__znver2__" : (undefined)
00:01:59.645 Fetching value of define "__znver3__" : (undefined)
00:01:59.645 Fetching value of define "__znver4__" : (undefined)
00:01:59.645 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:59.645 Message: lib/log: Defining dependency "log"
00:01:59.645 Message: lib/kvargs: Defining dependency "kvargs"
00:01:59.645 Message: lib/telemetry: Defining dependency "telemetry"
00:01:59.645 Checking for function "getentropy" : NO
00:01:59.645 Message: lib/eal: Defining dependency "eal"
00:01:59.645 Message: lib/ring: Defining dependency "ring"
00:01:59.645 Message: lib/rcu: Defining dependency "rcu"
00:01:59.645 Message: lib/mempool: Defining dependency "mempool"
00:01:59.645 Message: lib/mbuf: Defining dependency "mbuf"
00:01:59.645 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:59.645 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:59.645 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:59.645 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:59.645 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:59.645 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:59.645 Compiler for C supports arguments -mpclmul: YES
00:01:59.645 Compiler for C supports arguments -maes: YES
00:01:59.645 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:59.645 Compiler for C supports arguments -mavx512bw: YES
00:01:59.645 Compiler for C supports arguments -mavx512dq: YES
00:01:59.645 Compiler for C supports arguments -mavx512vl: YES
00:01:59.645 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:59.645 Compiler for C supports arguments -mavx2: YES
00:01:59.645 Compiler for C supports arguments -mavx: YES
00:01:59.646 Message: lib/net: Defining dependency "net"
00:01:59.646 Message: lib/meter: Defining dependency "meter"
00:01:59.646 Message: lib/ethdev: Defining dependency "ethdev"
00:01:59.646 Message: lib/pci: Defining dependency "pci"
00:01:59.646 Message: lib/cmdline: Defining dependency "cmdline"
00:01:59.646 Message: lib/hash: Defining dependency "hash"
00:01:59.646 Message: lib/timer: Defining dependency "timer"
00:01:59.646 Message: lib/compressdev: Defining dependency "compressdev"
00:01:59.646 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:59.646 Message: lib/dmadev: Defining dependency "dmadev"
00:01:59.646 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:59.646 Message: lib/power: Defining dependency "power"
00:01:59.646 Message: lib/reorder: Defining dependency "reorder"
00:01:59.646 Message: lib/security: Defining dependency "security"
00:01:59.646 Has header "linux/userfaultfd.h" : YES
00:01:59.646 Has header "linux/vduse.h" : YES
00:01:59.646 Message: lib/vhost: Defining dependency "vhost"
00:01:59.646 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:59.646 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:59.646 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:59.646 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:59.646 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:59.646 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:59.646 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:59.646 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:59.646 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:59.646 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:59.646 Program doxygen found: YES (/usr/bin/doxygen)
00:01:59.646 Configuring doxy-api-html.conf using configuration
00:01:59.646 Configuring doxy-api-man.conf using configuration
00:01:59.646 Program mandb found: YES (/usr/bin/mandb)
00:01:59.646 Program sphinx-build found: NO
00:01:59.646 Configuring rte_build_config.h using configuration
00:01:59.646 Message:
00:01:59.646 =================
00:01:59.646 Applications Enabled
00:01:59.646 =================
00:01:59.646
00:01:59.646 apps:
00:01:59.646
00:01:59.646
00:01:59.646 Message:
00:01:59.646 =================
00:01:59.646 Libraries Enabled
00:01:59.646 =================
00:01:59.646
00:01:59.646 libs:
00:01:59.646 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:59.646 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:59.646 cryptodev, dmadev, power, reorder, security, vhost,
00:01:59.646
00:01:59.646 Message:
00:01:59.646 ===============
00:01:59.646 Drivers Enabled
00:01:59.646 ===============
00:01:59.646
00:01:59.646 common:
00:01:59.646
00:01:59.646 bus:
00:01:59.646 pci, vdev,
00:01:59.646 mempool:
00:01:59.646 ring,
00:01:59.646 dma:
00:01:59.646
00:01:59.646 net:
00:01:59.646
00:01:59.646 crypto:
00:01:59.646
00:01:59.646 compress:
00:01:59.646
00:01:59.646 vdpa:
00:01:59.646
00:01:59.646
00:01:59.646 Message:
00:01:59.646 =================
00:01:59.646 Content Skipped
00:01:59.646 =================
00:01:59.646
00:01:59.646 apps:
00:01:59.646 dumpcap: explicitly disabled via build config
00:01:59.646 graph: explicitly disabled via build config
00:01:59.646 pdump: explicitly disabled via build config
00:01:59.646 proc-info: explicitly disabled via build config
00:01:59.646 test-acl: explicitly disabled via build config
00:01:59.646 test-bbdev: explicitly disabled via build config
00:01:59.646 test-cmdline: explicitly disabled via build config
00:01:59.646 test-compress-perf: explicitly disabled via build config
00:01:59.646 test-crypto-perf: explicitly disabled via build config
00:01:59.646 test-dma-perf: explicitly disabled via build config
00:01:59.646 test-eventdev: explicitly disabled via build config
00:01:59.646 test-fib: explicitly disabled via build config
00:01:59.646 test-flow-perf: explicitly disabled via build config
00:01:59.646 test-gpudev: explicitly disabled via build config
00:01:59.646 test-mldev: explicitly disabled via build config
00:01:59.646 test-pipeline: explicitly disabled via build config
00:01:59.646 test-pmd: explicitly disabled via build config
00:01:59.646 test-regex: explicitly disabled via build config
00:01:59.646 test-sad: explicitly disabled via build config
00:01:59.646 test-security-perf: explicitly disabled via build config
00:01:59.646
00:01:59.646 libs:
00:01:59.646 argparse: explicitly disabled via build config
00:01:59.646 metrics: explicitly disabled via build config
00:01:59.646 acl: explicitly disabled via build config
00:01:59.646 bbdev: explicitly disabled via build config
00:01:59.646 bitratestats: explicitly disabled via build config
00:01:59.646 bpf: explicitly disabled via build config
00:01:59.646 cfgfile: explicitly disabled via build config
00:01:59.646 distributor: explicitly disabled via build config
00:01:59.646 efd: explicitly disabled via build config
00:01:59.646 eventdev: explicitly disabled via build config
00:01:59.646 dispatcher: explicitly disabled via build config
00:01:59.646 gpudev: explicitly disabled via build config
00:01:59.646 gro: explicitly disabled via build config
00:01:59.646 gso: explicitly disabled via build config
00:01:59.646 ip_frag: explicitly disabled via build config
00:01:59.646 jobstats: explicitly disabled via build config
00:01:59.646 latencystats: explicitly disabled via build config
00:01:59.646 lpm: explicitly disabled via build config
00:01:59.646 member: explicitly disabled via build config
00:01:59.646 pcapng: explicitly disabled via build config
00:01:59.646 rawdev: explicitly disabled via build config
00:01:59.646 regexdev: explicitly disabled via build config
00:01:59.646 mldev: explicitly disabled via build config
00:01:59.646 rib: explicitly disabled via build config
00:01:59.646 sched: explicitly disabled via build config
00:01:59.646 stack: explicitly disabled via build config
00:01:59.646 ipsec: explicitly disabled via build config
00:01:59.646 pdcp: explicitly disabled via build config
00:01:59.646 fib: explicitly disabled via build config
00:01:59.646 port: explicitly disabled via build config
00:01:59.646 pdump: explicitly disabled via build config
00:01:59.646 table: explicitly disabled via build config
00:01:59.646 pipeline: explicitly disabled via build config
00:01:59.646 graph: explicitly disabled via build config
00:01:59.646 node: explicitly disabled via build config
00:01:59.646
00:01:59.646 drivers:
00:01:59.646 common/cpt: not in enabled drivers build config
00:01:59.646 common/dpaax: not in enabled drivers build config
00:01:59.646 common/iavf: not in enabled drivers build config
00:01:59.646 common/idpf: not in enabled drivers build config
00:01:59.646 common/ionic: not in enabled drivers build config
00:01:59.646 common/mvep: not in enabled drivers build config
00:01:59.646 common/octeontx: not in enabled drivers build config
00:01:59.646 bus/auxiliary: not in enabled drivers build config
00:01:59.646 bus/cdx: not in enabled drivers build config
00:01:59.646 bus/dpaa: not in enabled drivers build config
00:01:59.646 bus/fslmc: not in enabled drivers build config
00:01:59.646 bus/ifpga: not in enabled drivers build config
00:01:59.646 bus/platform: not in enabled drivers build config
00:01:59.646 bus/uacce: not in enabled drivers build config
00:01:59.646 bus/vmbus: not in enabled drivers build config
00:01:59.646 common/cnxk: not in enabled drivers build config
00:01:59.646 common/mlx5: not in enabled drivers build config
00:01:59.646 common/nfp: not in enabled drivers build config
00:01:59.646 common/nitrox: not in enabled drivers build config
00:01:59.646 common/qat: not in enabled drivers build config
00:01:59.646 common/sfc_efx: not in enabled drivers build config
00:01:59.646 mempool/bucket: not in enabled drivers build config
00:01:59.646 mempool/cnxk: not in enabled drivers build config
00:01:59.646 mempool/dpaa: not in enabled drivers build config
00:01:59.646 mempool/dpaa2: not in enabled drivers build config
00:01:59.646 mempool/octeontx: not in enabled drivers build config
00:01:59.646 mempool/stack: not in enabled drivers build config
00:01:59.646 dma/cnxk: not in enabled drivers build config
00:01:59.646 dma/dpaa: not in enabled drivers build config
00:01:59.646 dma/dpaa2: not in enabled drivers build config
00:01:59.646 dma/hisilicon: not in enabled drivers build config
00:01:59.646 dma/idxd: not in enabled drivers build config
00:01:59.646 dma/ioat: not in enabled drivers build config
00:01:59.646 dma/skeleton: not in enabled drivers build config
00:01:59.646 net/af_packet: not in enabled drivers build config
00:01:59.646 net/af_xdp: not in enabled drivers build config
00:01:59.646 net/ark: not in enabled drivers build config
00:01:59.646 net/atlantic: not in enabled drivers build config
00:01:59.646 net/avp: not in enabled drivers build config
00:01:59.646 net/axgbe: not in enabled drivers build config
00:01:59.646 net/bnx2x: not in enabled drivers build config
00:01:59.646 net/bnxt: not in enabled drivers build config
00:01:59.646 net/bonding: not in enabled drivers build config
00:01:59.646 net/cnxk: not in enabled drivers build config
00:01:59.646 net/cpfl: not in enabled drivers build config
00:01:59.646 net/cxgbe: not in enabled drivers build config
00:01:59.646 net/dpaa: not in enabled drivers build config
00:01:59.646 net/dpaa2: not in enabled drivers build config
00:01:59.646 net/e1000: not in enabled drivers build config
00:01:59.646 net/ena: not in enabled drivers build config
00:01:59.646 net/enetc: not in enabled drivers build config
00:01:59.646 net/enetfec: not in enabled drivers build config
00:01:59.646 net/enic: not in enabled drivers build config
00:01:59.646 net/failsafe: not in enabled drivers build config
00:01:59.646 net/fm10k: not in enabled drivers build config
00:01:59.646 net/gve: not in enabled drivers build config
00:01:59.646 net/hinic: not in enabled drivers build config
00:01:59.646 net/hns3: not in enabled drivers build config
00:01:59.646 net/i40e: not in enabled drivers build config
00:01:59.646 net/iavf: not in enabled drivers build config
00:01:59.646 net/ice: not in enabled drivers build config
00:01:59.646 net/idpf: not in enabled drivers build config
00:01:59.646 net/igc: not in enabled drivers build config
00:01:59.646 net/ionic: not in enabled drivers build config
00:01:59.646 net/ipn3ke: not in enabled drivers build config
00:01:59.646 net/ixgbe: not in enabled drivers build config
00:01:59.646 net/mana: not in enabled drivers build config
00:01:59.646 net/memif: not in enabled drivers build config
00:01:59.646 net/mlx4: not in enabled drivers build config
00:01:59.646 net/mlx5: not in enabled drivers build config
00:01:59.646 net/mvneta: not in enabled drivers build config
00:01:59.646 net/mvpp2: not in enabled drivers build config
00:01:59.646 net/netvsc: not in enabled drivers build config
00:01:59.647 net/nfb: not in enabled drivers build config
00:01:59.647 net/nfp: not in enabled drivers build config
00:01:59.647 net/ngbe: not in enabled drivers build config
00:01:59.647 net/null: not in enabled drivers build config
00:01:59.647 net/octeontx: not in enabled drivers build config
00:01:59.647 net/octeon_ep: not in enabled drivers build config
00:01:59.647 net/pcap: not in enabled drivers build config
00:01:59.647 net/pfe: not in enabled drivers build config
00:01:59.647 net/qede: not in enabled drivers build config
00:01:59.647 net/ring: not in enabled drivers build config
00:01:59.647 net/sfc: not in enabled drivers build config
00:01:59.647 net/softnic: not in enabled drivers build config
00:01:59.647 net/tap: not in enabled drivers build config
00:01:59.647 net/thunderx: not in enabled drivers build config
00:01:59.647 net/txgbe: not in enabled drivers build config
00:01:59.647 net/vdev_netvsc: not in enabled drivers build config
00:01:59.647 net/vhost: not in enabled drivers build config
00:01:59.647 net/virtio: not in enabled drivers build config
00:01:59.647 net/vmxnet3: not in enabled drivers build config
00:01:59.647 raw/*: missing internal dependency, "rawdev"
00:01:59.647 crypto/armv8: not in enabled drivers build config
00:01:59.647 crypto/bcmfs: not in enabled drivers build config
00:01:59.647 crypto/caam_jr: not in enabled drivers build config
00:01:59.647 crypto/ccp: not in enabled drivers build config
00:01:59.647 crypto/cnxk: not in enabled drivers build config
00:01:59.647 crypto/dpaa_sec: not in enabled drivers build config
00:01:59.647 crypto/dpaa2_sec: not in enabled drivers build config
00:01:59.647 crypto/ipsec_mb: not in enabled drivers build config
00:01:59.647 crypto/mlx5: not in enabled drivers build config
00:01:59.647 crypto/mvsam: not in enabled drivers build config
00:01:59.647 crypto/nitrox: not in enabled drivers build config
00:01:59.647 crypto/null: not in enabled drivers build config
00:01:59.647 crypto/octeontx: not in enabled drivers build config
00:01:59.647 crypto/openssl: not in enabled drivers build config
00:01:59.647 crypto/scheduler: not in enabled drivers build config
00:01:59.647 crypto/uadk: not in enabled drivers build config
00:01:59.647 crypto/virtio: not in enabled drivers build config
00:01:59.647 compress/isal: not in enabled drivers build config
00:01:59.647 compress/mlx5: not in enabled drivers build config
00:01:59.647 compress/nitrox: not in enabled drivers build config
00:01:59.647 compress/octeontx: not in enabled drivers build config
00:01:59.647 compress/zlib: not in enabled drivers build config
00:01:59.647 regex/*: missing internal dependency, "regexdev"
00:01:59.647 ml/*: missing internal dependency, "mldev"
00:01:59.647 vdpa/ifc: not in enabled drivers build config
00:01:59.647 vdpa/mlx5: not in enabled drivers build config
00:01:59.647 vdpa/nfp: not in enabled drivers build config
00:01:59.647 vdpa/sfc: not in enabled drivers build config
00:01:59.647 event/*: missing internal dependency, "eventdev"
00:01:59.647 baseband/*: missing internal dependency, "bbdev"
00:01:59.647 gpu/*: missing internal dependency, "gpudev"
00:01:59.647
00:01:59.647
00:01:59.906 Build targets in project: 85
00:01:59.906
00:01:59.906 DPDK 24.03.0
00:01:59.906
00:01:59.906 User defined options
00:01:59.906 buildtype : debug
00:01:59.906 default_library : shared
00:01:59.906 libdir : lib
00:01:59.906 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:59.906 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:59.906 c_link_args :
00:01:59.906 cpu_instruction_set: native
00:01:59.906 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:59.906 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:59.906 enable_docs : false
00:01:59.906 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:59.906 enable_kmods : false
00:01:59.906 max_lcores : 128
00:01:59.906 tests : false
00:01:59.906
00:01:59.906 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:00.482 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:00.482 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:00.482 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:00.482 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:00.482 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:00.482 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:00.482 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:00.482 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:00.482 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:00.482 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:00.482 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:00.482 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:00.482 [12/268] Linking static target lib/librte_kvargs.a
00:02:00.482 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:00.482 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:00.482 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:00.482 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:00.482 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:00.482 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:00.482 [19/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:00.482 [20/268] Linking static target lib/librte_log.a
00:02:00.482 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:00.482 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:00.482 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:00.482 [24/268] Linking static target lib/librte_pci.a
00:02:00.482 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:00.482 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:00.482 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:00.482 [28/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:00.482 [29/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:00.482 [30/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:00.482 [31/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:00.482 [32/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:00.744 [33/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:01.002 [34/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:01.002 [35/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:01.002 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:01.002 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:01.002 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:01.002 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:01.002 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:01.002 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:01.002 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:01.002 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:01.002 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:01.002 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:01.002 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:01.002 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:01.002 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:01.002 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:01.002 [50/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:01.002 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:01.002 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:01.002 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:01.002 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:01.002 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:01.002 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:01.002 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:01.002 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:01.002 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:01.002 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:01.002 [61/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:01.002 [62/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:01.002 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:01.002 [64/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:01.002 [65/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.002 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:01.002 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:01.002 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:01.002 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:01.002 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:01.003 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:01.003 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:01.003 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:01.003 [74/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:01.003 [75/268] Linking static target lib/librte_meter.a
00:02:01.003 [76/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:01.003 [77/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:01.003 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:01.003 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:01.003 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:01.003 [81/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:01.003 [82/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:01.003 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:01.262 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:01.262 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:01.262 [86/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.262 [87/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:01.262 [88/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:01.262 [89/268] Linking static target lib/librte_telemetry.a
00:02:01.262 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:01.262 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:01.262 [92/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:01.262 [93/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:01.262 [94/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:01.262 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:01.262 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:01.262 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:01.262 [98/268] Linking static target lib/librte_ring.a
00:02:01.262 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:01.262 [100/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:01.262 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:01.262 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:01.262 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:01.262 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:01.262 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:01.262 [106/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:01.262 [107/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:01.262 [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:01.262 [109/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:01.262 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:01.262 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:01.262 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:01.262 [113/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:01.262 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:01.262 [115/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:01.262 [116/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:01.262 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:01.262 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:01.262 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:01.262 [120/268] Linking static target lib/librte_rcu.a
00:02:01.262 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:01.262 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:01.262 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:01.262 [124/268] Linking static target lib/librte_cmdline.a
00:02:01.262 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:01.262 [126/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:01.262 [127/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:01.262 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:01.262 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:01.262 [130/268] Linking static target lib/librte_mempool.a
00:02:01.262 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:01.262 [132/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:01.262 [133/268] Linking static target lib/librte_eal.a
00:02:01.262 [134/268] Linking static target lib/librte_timer.a
00:02:01.262 [135/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:01.262 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:01.262 [137/268] Linking static target lib/librte_net.a
00:02:01.262 [138/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:01.262 [139/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:01.262 [140/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:01.262 [141/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:01.262 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:01.262 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:01.262 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:01.262 [145/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:01.262 [146/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:01.262 [147/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:01.262 [148/268] Linking static target lib/librte_mbuf.a
00:02:01.262 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:01.262 [150/268] Linking static target lib/librte_dmadev.a
00:02:01.262 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:01.521 [152/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.521 [153/268]
Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:01.521 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:01.521 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:01.521 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:01.521 [157/268] Linking static target lib/librte_compressdev.a 00:02:01.521 [158/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:01.521 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:01.521 [160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:01.521 [161/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.521 [162/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:01.521 [163/268] Linking target lib/librte_log.so.24.1 00:02:01.521 [164/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:01.521 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:01.521 [166/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.521 [167/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:01.521 [168/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:01.521 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:01.521 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:01.521 [171/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:01.521 [172/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:01.521 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:01.521 [174/268] Linking static target lib/librte_reorder.a 00:02:01.521 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:01.521 [176/268] Linking static target lib/librte_power.a 00:02:01.521 [177/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:01.521 [178/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:01.521 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:01.521 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:01.521 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:01.521 [182/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:01.521 [183/268] Linking static target lib/librte_hash.a 00:02:01.521 [184/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:01.521 [185/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.521 [186/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.521 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:01.521 [188/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:01.521 [189/268] Linking static target lib/librte_security.a 00:02:01.780 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:01.780 [191/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:01.780 [192/268] Linking target lib/librte_kvargs.so.24.1 00:02:01.780 [193/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:01.780 [194/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.780 [195/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:01.780 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.780 [197/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:01.780 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:01.780 [199/268] Linking static target lib/librte_cryptodev.a 00:02:01.780 [200/268] Linking static target drivers/librte_bus_vdev.a 00:02:01.780 [201/268] Linking target lib/librte_telemetry.so.24.1 00:02:01.780 [202/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:01.780 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.780 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:01.780 [205/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:01.780 [206/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.780 [207/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:01.780 [208/268] Linking static target drivers/librte_mempool_ring.a 00:02:01.780 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.780 [210/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:01.780 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:01.780 [212/268] Linking static target drivers/librte_bus_pci.a 00:02:01.780 [213/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:02.039 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.039 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:02.039 [216/268] Linking static target lib/librte_ethdev.a 00:02:02.039 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.039 [218/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.298 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.298 [220/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.298 [221/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.298 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.298 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:02.556 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.556 [225/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.556 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.556 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.124 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:03.383 [229/268] 
Linking static target lib/librte_vhost.a 00:02:03.950 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.849 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.430 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.360 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.360 [234/268] Linking target lib/librte_eal.so.24.1 00:02:14.618 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:14.618 [236/268] Linking target lib/librte_meter.so.24.1 00:02:14.618 [237/268] Linking target lib/librte_timer.so.24.1 00:02:14.618 [238/268] Linking target lib/librte_pci.so.24.1 00:02:14.618 [239/268] Linking target lib/librte_ring.so.24.1 00:02:14.618 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:14.618 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:14.618 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:14.618 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:14.618 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:14.618 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:14.618 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:14.877 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:14.877 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:14.877 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:14.877 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:14.877 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:14.877 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:14.877 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:15.136 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:15.136 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:15.136 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:15.136 [257/268] Linking target lib/librte_net.so.24.1 00:02:15.136 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:15.393 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:15.393 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:15.393 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:15.393 [262/268] Linking target lib/librte_hash.so.24.1 00:02:15.393 [263/268] Linking target lib/librte_security.so.24.1 00:02:15.393 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:15.393 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:15.393 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:15.651 [267/268] Linking target lib/librte_power.so.24.1 00:02:15.651 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:15.651 INFO: autodetecting backend as ninja 00:02:15.651 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 112 00:02:16.585 CC lib/ut/ut.o 00:02:16.585 CC lib/ut_mock/mock.o 00:02:16.585 CC lib/log/log.o 00:02:16.585 CC 
lib/log/log_deprecated.o 00:02:16.585 CC lib/log/log_flags.o 00:02:16.843 LIB libspdk_ut.a 00:02:16.843 LIB libspdk_ut_mock.a 00:02:16.843 LIB libspdk_log.a 00:02:16.843 SO libspdk_ut.so.2.0 00:02:16.843 SO libspdk_ut_mock.so.6.0 00:02:16.843 SO libspdk_log.so.7.0 00:02:16.843 SYMLINK libspdk_ut.so 00:02:16.843 SYMLINK libspdk_ut_mock.so 00:02:16.843 SYMLINK libspdk_log.so 00:02:17.409 CC lib/dma/dma.o 00:02:17.409 CC lib/ioat/ioat.o 00:02:17.409 CC lib/util/base64.o 00:02:17.409 CC lib/util/bit_array.o 00:02:17.409 CC lib/util/cpuset.o 00:02:17.409 CC lib/util/crc16.o 00:02:17.409 CC lib/util/crc32.o 00:02:17.409 CC lib/util/crc32c.o 00:02:17.409 CC lib/util/crc32_ieee.o 00:02:17.409 CC lib/util/crc64.o 00:02:17.409 CXX lib/trace_parser/trace.o 00:02:17.409 CC lib/util/dif.o 00:02:17.409 CC lib/util/fd.o 00:02:17.409 CC lib/util/file.o 00:02:17.409 CC lib/util/hexlify.o 00:02:17.409 CC lib/util/iov.o 00:02:17.409 CC lib/util/math.o 00:02:17.409 CC lib/util/pipe.o 00:02:17.409 CC lib/util/strerror_tls.o 00:02:17.409 CC lib/util/string.o 00:02:17.409 CC lib/util/uuid.o 00:02:17.409 CC lib/util/fd_group.o 00:02:17.409 CC lib/util/xor.o 00:02:17.409 CC lib/util/zipf.o 00:02:17.409 CC lib/vfio_user/host/vfio_user_pci.o 00:02:17.409 CC lib/vfio_user/host/vfio_user.o 00:02:17.409 LIB libspdk_dma.a 00:02:17.409 SO libspdk_dma.so.4.0 00:02:17.666 LIB libspdk_ioat.a 00:02:17.666 SYMLINK libspdk_dma.so 00:02:17.666 SO libspdk_ioat.so.7.0 00:02:17.666 SYMLINK libspdk_ioat.so 00:02:17.666 LIB libspdk_vfio_user.a 00:02:17.666 SO libspdk_vfio_user.so.5.0 00:02:17.666 LIB libspdk_util.a 00:02:17.666 SYMLINK libspdk_vfio_user.so 00:02:17.924 SO libspdk_util.so.9.1 00:02:17.924 SYMLINK libspdk_util.so 00:02:17.924 LIB libspdk_trace_parser.a 00:02:17.924 SO libspdk_trace_parser.so.5.0 00:02:18.184 SYMLINK libspdk_trace_parser.so 00:02:18.184 CC lib/vmd/vmd.o 00:02:18.184 CC lib/vmd/led.o 00:02:18.184 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:18.184 CC lib/rdma_provider/common.o 00:02:18.184 CC lib/conf/conf.o 00:02:18.184 CC lib/env_dpdk/env.o 00:02:18.184 CC lib/env_dpdk/memory.o 00:02:18.184 CC lib/env_dpdk/pci.o 00:02:18.184 CC lib/json/json_parse.o 00:02:18.184 CC lib/json/json_util.o 00:02:18.184 CC lib/json/json_write.o 00:02:18.184 CC lib/env_dpdk/init.o 00:02:18.184 CC lib/env_dpdk/threads.o 00:02:18.184 CC lib/env_dpdk/pci_ioat.o 00:02:18.184 CC lib/env_dpdk/pci_virtio.o 00:02:18.184 CC lib/env_dpdk/pci_vmd.o 00:02:18.184 CC lib/env_dpdk/pci_event.o 00:02:18.184 CC lib/env_dpdk/pci_idxd.o 00:02:18.184 CC lib/env_dpdk/sigbus_handler.o 00:02:18.184 CC lib/rdma_utils/rdma_utils.o 00:02:18.184 CC lib/idxd/idxd.o 00:02:18.184 CC lib/env_dpdk/pci_dpdk.o 00:02:18.184 CC lib/idxd/idxd_user.o 00:02:18.184 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:18.184 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:18.184 CC lib/idxd/idxd_kernel.o 00:02:18.441 LIB libspdk_rdma_provider.a 00:02:18.441 LIB libspdk_conf.a 00:02:18.441 SO libspdk_rdma_provider.so.6.0 00:02:18.441 SO libspdk_conf.so.6.0 00:02:18.722 LIB libspdk_rdma_utils.a 00:02:18.722 LIB libspdk_json.a 00:02:18.722 SYMLINK libspdk_conf.so 00:02:18.722 SYMLINK libspdk_rdma_provider.so 00:02:18.722 SO libspdk_rdma_utils.so.1.0 00:02:18.722 SO libspdk_json.so.6.0 00:02:18.722 SYMLINK libspdk_rdma_utils.so 00:02:18.722 SYMLINK libspdk_json.so 00:02:18.722 LIB libspdk_idxd.a 00:02:18.722 LIB libspdk_vmd.a 00:02:18.722 SO libspdk_idxd.so.12.0 00:02:18.722 SO libspdk_vmd.so.6.0 00:02:18.980 SYMLINK libspdk_idxd.so 00:02:18.980 SYMLINK libspdk_vmd.so 00:02:18.980 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:18.980 CC lib/jsonrpc/jsonrpc_client.o 00:02:18.980 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:18.980 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:19.237 LIB libspdk_jsonrpc.a 00:02:19.237 LIB libspdk_env_dpdk.a 00:02:19.237 SO libspdk_jsonrpc.so.6.0 00:02:19.495 SO libspdk_env_dpdk.so.14.1 00:02:19.495 SYMLINK libspdk_jsonrpc.so 00:02:19.495 SYMLINK libspdk_env_dpdk.so 00:02:19.756 CC lib/rpc/rpc.o 00:02:20.013 LIB libspdk_rpc.a 00:02:20.013 SO libspdk_rpc.so.6.0 00:02:20.013 SYMLINK libspdk_rpc.so 00:02:20.576 CC lib/keyring/keyring.o 00:02:20.576 CC lib/keyring/keyring_rpc.o 00:02:20.576 CC lib/notify/notify.o 00:02:20.576 CC lib/notify/notify_rpc.o 00:02:20.576 CC lib/trace/trace.o 00:02:20.576 CC lib/trace/trace_flags.o 00:02:20.576 CC lib/trace/trace_rpc.o 00:02:20.576 LIB libspdk_notify.a 00:02:20.576 LIB libspdk_keyring.a 00:02:20.576 SO libspdk_notify.so.6.0 00:02:20.576 LIB libspdk_trace.a 00:02:20.576 SO libspdk_keyring.so.1.0 00:02:20.576 SYMLINK libspdk_notify.so 00:02:20.576 SO libspdk_trace.so.10.0 00:02:20.833 SYMLINK libspdk_keyring.so 00:02:20.833 SYMLINK libspdk_trace.so 00:02:21.089 CC lib/thread/thread.o 00:02:21.089 CC lib/thread/iobuf.o 00:02:21.089 CC lib/sock/sock.o 00:02:21.089 CC lib/sock/sock_rpc.o 00:02:21.346 LIB libspdk_sock.a 00:02:21.603 SO libspdk_sock.so.10.0 00:02:21.603 SYMLINK libspdk_sock.so 00:02:21.861 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:21.861 CC lib/nvme/nvme_ctrlr.o 00:02:21.861 CC lib/nvme/nvme_fabric.o 00:02:21.861 CC lib/nvme/nvme_ns_cmd.o 00:02:21.861 CC lib/nvme/nvme_ns.o 00:02:21.861 CC lib/nvme/nvme_pcie_common.o 00:02:21.861 CC lib/nvme/nvme_pcie.o 00:02:21.861 CC lib/nvme/nvme_qpair.o 00:02:21.861 CC lib/nvme/nvme.o 00:02:21.861 CC lib/nvme/nvme_transport.o 00:02:21.861 CC lib/nvme/nvme_quirks.o 00:02:21.861 CC lib/nvme/nvme_discovery.o 00:02:21.861 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:21.861 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:21.861 CC lib/nvme/nvme_tcp.o 00:02:21.861 CC lib/nvme/nvme_opal.o 00:02:21.861 CC lib/nvme/nvme_io_msg.o 00:02:21.861 CC lib/nvme/nvme_poll_group.o 00:02:21.861 CC lib/nvme/nvme_stubs.o 00:02:21.861 CC lib/nvme/nvme_zns.o 00:02:21.861 CC lib/nvme/nvme_auth.o 00:02:21.861 CC lib/nvme/nvme_cuse.o 00:02:21.861 CC lib/nvme/nvme_vfio_user.o 00:02:21.861 CC lib/nvme/nvme_rdma.o 00:02:22.118 LIB libspdk_thread.a 00:02:22.118 SO libspdk_thread.so.10.1 00:02:22.376 SYMLINK libspdk_thread.so 00:02:22.633 CC lib/vfu_tgt/tgt_endpoint.o 00:02:22.633 CC lib/vfu_tgt/tgt_rpc.o 00:02:22.633 CC lib/virtio/virtio.o 00:02:22.633 CC lib/virtio/virtio_vhost_user.o 00:02:22.633 CC lib/virtio/virtio_vfio_user.o 00:02:22.633 CC lib/accel/accel.o 00:02:22.633 CC lib/blob/request.o 00:02:22.633 CC lib/blob/blobstore.o 00:02:22.633 CC lib/accel/accel_sw.o 00:02:22.633 CC lib/virtio/virtio_pci.o 00:02:22.633 CC lib/accel/accel_rpc.o 00:02:22.633 CC lib/blob/zeroes.o 00:02:22.633 CC lib/blob/blob_bs_dev.o 00:02:22.633 CC lib/init/subsystem.o 00:02:22.633 CC lib/init/json_config.o 00:02:22.633 CC lib/init/subsystem_rpc.o 00:02:22.633 CC lib/init/rpc.o 00:02:22.890 LIB libspdk_vfu_tgt.a 00:02:22.890 LIB libspdk_init.a 00:02:22.890 SO libspdk_vfu_tgt.so.3.0 00:02:22.890 LIB libspdk_virtio.a 00:02:22.890 SO libspdk_init.so.5.0 00:02:22.890 SO libspdk_virtio.so.7.0 00:02:22.890 SYMLINK libspdk_vfu_tgt.so 00:02:22.890 SYMLINK libspdk_init.so 00:02:22.890 SYMLINK libspdk_virtio.so 00:02:23.454 CC lib/event/app.o 00:02:23.454 CC lib/event/reactor.o 00:02:23.454 CC lib/event/log_rpc.o 00:02:23.454 CC 
lib/event/app_rpc.o 00:02:23.454 CC lib/event/scheduler_static.o 00:02:23.454 LIB libspdk_accel.a 00:02:23.454 SO libspdk_accel.so.15.1 00:02:23.454 SYMLINK libspdk_accel.so 00:02:23.454 LIB libspdk_nvme.a 00:02:23.712 SO libspdk_nvme.so.13.1 00:02:23.712 LIB libspdk_event.a 00:02:23.712 SO libspdk_event.so.14.0 00:02:23.712 SYMLINK libspdk_event.so 00:02:23.712 CC lib/bdev/bdev_rpc.o 00:02:23.712 CC lib/bdev/bdev.o 00:02:23.712 CC lib/bdev/part.o 00:02:23.712 CC lib/bdev/bdev_zone.o 00:02:23.712 CC lib/bdev/scsi_nvme.o 00:02:23.968 SYMLINK libspdk_nvme.so 00:02:24.897 LIB libspdk_blob.a 00:02:24.897 SO libspdk_blob.so.11.0 00:02:24.897 SYMLINK libspdk_blob.so 00:02:25.155 CC lib/lvol/lvol.o 00:02:25.155 CC lib/blobfs/blobfs.o 00:02:25.155 CC lib/blobfs/tree.o 00:02:25.720 LIB libspdk_bdev.a 00:02:25.720 SO libspdk_bdev.so.16.0 00:02:25.720 SYMLINK libspdk_bdev.so 00:02:25.720 LIB libspdk_blobfs.a 00:02:25.720 SO libspdk_blobfs.so.10.0 00:02:25.720 LIB libspdk_lvol.a 00:02:25.978 SO libspdk_lvol.so.10.0 00:02:25.978 SYMLINK libspdk_blobfs.so 00:02:25.978 SYMLINK libspdk_lvol.so 00:02:25.978 CC lib/nbd/nbd.o 00:02:25.978 CC lib/ublk/ublk.o 00:02:25.978 CC lib/nbd/nbd_rpc.o 00:02:25.978 CC lib/ublk/ublk_rpc.o 00:02:25.978 CC lib/nvmf/ctrlr.o 00:02:25.978 CC lib/nvmf/ctrlr_discovery.o 00:02:25.978 CC lib/nvmf/ctrlr_bdev.o 00:02:25.978 CC lib/nvmf/subsystem.o 00:02:25.978 CC lib/nvmf/nvmf_rpc.o 00:02:25.978 CC lib/nvmf/nvmf.o 00:02:25.978 CC lib/nvmf/transport.o 00:02:25.978 CC lib/nvmf/tcp.o 00:02:25.978 CC lib/nvmf/stubs.o 00:02:25.978 CC lib/nvmf/mdns_server.o 00:02:25.978 CC lib/nvmf/rdma.o 00:02:25.978 CC lib/scsi/dev.o 00:02:25.978 CC lib/nvmf/vfio_user.o 00:02:25.978 CC lib/scsi/lun.o 00:02:25.978 CC lib/nvmf/auth.o 00:02:25.978 CC lib/ftl/ftl_core.o 00:02:25.978 CC lib/scsi/port.o 00:02:25.978 CC lib/ftl/ftl_init.o 00:02:25.978 CC lib/scsi/scsi.o 00:02:25.978 CC lib/ftl/ftl_layout.o 00:02:25.978 CC lib/scsi/scsi_bdev.o 00:02:25.978 CC lib/ftl/ftl_io.o 00:02:25.978 CC lib/ftl/ftl_debug.o 00:02:25.978 CC lib/scsi/scsi_pr.o 00:02:25.978 CC lib/ftl/ftl_sb.o 00:02:25.978 CC lib/scsi/scsi_rpc.o 00:02:25.978 CC lib/scsi/task.o 00:02:25.978 CC lib/ftl/ftl_l2p.o 00:02:25.978 CC lib/ftl/ftl_l2p_flat.o 00:02:25.978 CC lib/ftl/ftl_nv_cache.o 00:02:25.978 CC lib/ftl/ftl_band.o 00:02:25.978 CC lib/ftl/ftl_band_ops.o 00:02:25.978 CC lib/ftl/ftl_reloc.o 00:02:25.978 CC lib/ftl/ftl_writer.o 00:02:25.978 CC lib/ftl/ftl_rq.o 00:02:25.978 CC lib/ftl/ftl_l2p_cache.o 00:02:25.978 CC lib/ftl/mngt/ftl_mngt.o 00:02:25.978 CC lib/ftl/ftl_p2l.o 00:02:25.978 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:25.978 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:25.978 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:25.978 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:26.235 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:26.235 CC lib/ftl/utils/ftl_conf.o 00:02:26.235 CC lib/ftl/utils/ftl_md.o 00:02:26.235 CC lib/ftl/utils/ftl_mempool.o 00:02:26.235 CC lib/ftl/utils/ftl_property.o 00:02:26.235 CC lib/ftl/utils/ftl_bitmap.o 00:02:26.235 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:26.235 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:26.235 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:26.235 CC lib/ftl/upgrade/ftl_layout_upgrade.o 
00:02:26.235 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:26.235 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:26.235 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:26.235 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:26.235 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:26.235 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:26.235 CC lib/ftl/base/ftl_base_dev.o 00:02:26.236 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:26.236 CC lib/ftl/ftl_trace.o 00:02:26.236 CC lib/ftl/base/ftl_base_bdev.o 00:02:26.499 LIB libspdk_nbd.a 00:02:26.499 SO libspdk_nbd.so.7.0 00:02:26.757 SYMLINK libspdk_nbd.so 00:02:26.757 LIB libspdk_scsi.a 00:02:26.757 SO libspdk_scsi.so.9.0 00:02:26.757 LIB libspdk_ublk.a 00:02:26.757 SO libspdk_ublk.so.3.0 00:02:26.757 SYMLINK libspdk_scsi.so 00:02:27.015 SYMLINK libspdk_ublk.so 00:02:27.015 LIB libspdk_ftl.a 00:02:27.273 SO libspdk_ftl.so.9.0 00:02:27.273 CC lib/vhost/vhost.o 00:02:27.273 CC lib/vhost/vhost_rpc.o 00:02:27.273 CC lib/vhost/vhost_scsi.o 00:02:27.273 CC lib/vhost/vhost_blk.o 00:02:27.273 CC lib/vhost/rte_vhost_user.o 00:02:27.273 CC lib/iscsi/conn.o 00:02:27.273 CC lib/iscsi/init_grp.o 00:02:27.273 CC lib/iscsi/iscsi.o 00:02:27.273 CC lib/iscsi/md5.o 00:02:27.273 CC lib/iscsi/param.o 00:02:27.273 CC lib/iscsi/portal_grp.o 00:02:27.273 CC lib/iscsi/tgt_node.o 00:02:27.273 CC lib/iscsi/iscsi_rpc.o 00:02:27.273 CC lib/iscsi/iscsi_subsystem.o 00:02:27.273 CC lib/iscsi/task.o 00:02:27.532 SYMLINK libspdk_ftl.so 00:02:27.789 LIB libspdk_nvmf.a 00:02:27.789 SO libspdk_nvmf.so.18.1 00:02:28.048 SYMLINK libspdk_nvmf.so 00:02:28.048 LIB libspdk_vhost.a 00:02:28.048 SO libspdk_vhost.so.8.0 00:02:28.048 SYMLINK libspdk_vhost.so 00:02:28.306 LIB libspdk_iscsi.a 00:02:28.306 SO libspdk_iscsi.so.8.0 00:02:28.306 SYMLINK libspdk_iscsi.so 00:02:28.870 CC module/vfu_device/vfu_virtio_blk.o 00:02:28.870 CC module/vfu_device/vfu_virtio.o 00:02:28.870 CC module/vfu_device/vfu_virtio_scsi.o 00:02:28.870 CC module/vfu_device/vfu_virtio_rpc.o 00:02:28.870 CC module/env_dpdk/env_dpdk_rpc.o 00:02:29.168 LIB libspdk_env_dpdk_rpc.a 00:02:29.168 CC module/blob/bdev/blob_bdev.o 00:02:29.168 CC module/keyring/file/keyring.o 00:02:29.168 CC module/keyring/linux/keyring.o 00:02:29.168 CC module/keyring/file/keyring_rpc.o 00:02:29.168 CC module/sock/posix/posix.o 00:02:29.168 CC module/scheduler/gscheduler/gscheduler.o 00:02:29.168 CC module/keyring/linux/keyring_rpc.o 00:02:29.168 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:29.168 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:29.169 CC module/accel/ioat/accel_ioat.o 00:02:29.169 CC module/accel/ioat/accel_ioat_rpc.o 00:02:29.169 CC module/accel/iaa/accel_iaa.o 00:02:29.169 CC module/accel/iaa/accel_iaa_rpc.o 00:02:29.169 SO libspdk_env_dpdk_rpc.so.6.0 00:02:29.169 CC module/accel/error/accel_error.o 00:02:29.169 CC module/accel/error/accel_error_rpc.o 00:02:29.169 CC module/accel/dsa/accel_dsa.o 00:02:29.169 CC module/accel/dsa/accel_dsa_rpc.o 00:02:29.169 SYMLINK libspdk_env_dpdk_rpc.so 00:02:29.447 LIB libspdk_scheduler_gscheduler.a 00:02:29.447 LIB libspdk_keyring_linux.a 00:02:29.447 LIB libspdk_keyring_file.a 00:02:29.447 LIB libspdk_scheduler_dpdk_governor.a 00:02:29.447 LIB libspdk_scheduler_dynamic.a 00:02:29.447 SO libspdk_scheduler_gscheduler.so.4.0 00:02:29.447 SO libspdk_keyring_file.so.1.0 00:02:29.447 SO libspdk_keyring_linux.so.1.0 00:02:29.447 LIB libspdk_accel_ioat.a 00:02:29.448 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:29.448 LIB libspdk_accel_error.a 00:02:29.448 SO libspdk_scheduler_dynamic.so.4.0 00:02:29.448 LIB 
libspdk_accel_iaa.a 00:02:29.448 SO libspdk_accel_ioat.so.6.0 00:02:29.448 LIB libspdk_blob_bdev.a 00:02:29.448 SYMLINK libspdk_keyring_file.so 00:02:29.448 SYMLINK libspdk_scheduler_gscheduler.so 00:02:29.448 SO libspdk_accel_error.so.2.0 00:02:29.448 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:29.448 SO libspdk_accel_iaa.so.3.0 00:02:29.448 LIB libspdk_accel_dsa.a 00:02:29.448 SYMLINK libspdk_keyring_linux.so 00:02:29.448 SYMLINK libspdk_scheduler_dynamic.so 00:02:29.448 SO libspdk_blob_bdev.so.11.0 00:02:29.448 SYMLINK libspdk_accel_error.so 00:02:29.448 SYMLINK libspdk_accel_ioat.so 00:02:29.448 LIB libspdk_vfu_device.a 00:02:29.448 SO libspdk_accel_dsa.so.5.0 00:02:29.448 SYMLINK libspdk_accel_iaa.so 00:02:29.448 SYMLINK libspdk_blob_bdev.so 00:02:29.448 SO libspdk_vfu_device.so.3.0 00:02:29.448 SYMLINK libspdk_accel_dsa.so 00:02:29.706 SYMLINK libspdk_vfu_device.so 00:02:29.706 LIB libspdk_sock_posix.a 00:02:29.706 SO libspdk_sock_posix.so.6.0 00:02:29.706 SYMLINK libspdk_sock_posix.so 00:02:29.964 CC module/bdev/lvol/vbdev_lvol.o 00:02:29.964 CC module/bdev/gpt/gpt.o 00:02:29.964 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:29.964 CC module/bdev/gpt/vbdev_gpt.o 00:02:29.964 CC module/bdev/ftl/bdev_ftl.o 00:02:29.964 CC module/bdev/error/vbdev_error.o 00:02:29.964 CC module/bdev/aio/bdev_aio.o 00:02:29.964 CC module/bdev/null/bdev_null.o 00:02:29.964 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:29.964 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:29.964 CC module/bdev/malloc/bdev_malloc.o 00:02:29.964 CC module/bdev/delay/vbdev_delay.o 00:02:29.964 CC module/bdev/null/bdev_null_rpc.o 00:02:29.964 CC module/bdev/aio/bdev_aio_rpc.o 00:02:29.964 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:29.964 CC module/bdev/error/vbdev_error_rpc.o 00:02:29.964 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:29.964 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:29.964 CC module/bdev/passthru/vbdev_passthru.o 00:02:29.964 CC module/bdev/nvme/bdev_nvme.o 00:02:29.964 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:29.964 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:29.964 CC module/bdev/nvme/nvme_rpc.o 00:02:29.964 CC module/bdev/nvme/bdev_mdns_client.o 00:02:29.964 CC module/bdev/split/vbdev_split_rpc.o 00:02:29.964 CC module/bdev/split/vbdev_split.o 00:02:29.964 CC module/bdev/nvme/vbdev_opal.o 00:02:29.964 CC module/bdev/raid/bdev_raid.o 00:02:29.964 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:29.964 CC module/bdev/raid/bdev_raid_rpc.o 00:02:29.964 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:29.964 CC module/bdev/raid/bdev_raid_sb.o 00:02:29.964 CC module/bdev/raid/raid0.o 00:02:29.964 CC module/bdev/iscsi/bdev_iscsi.o 00:02:29.964 CC module/bdev/raid/raid1.o 00:02:29.964 CC module/blobfs/bdev/blobfs_bdev.o 00:02:29.964 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:29.964 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:29.964 CC module/bdev/raid/concat.o 00:02:29.964 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:29.964 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:29.964 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:30.221 LIB libspdk_blobfs_bdev.a 00:02:30.221 LIB libspdk_bdev_null.a 00:02:30.221 LIB libspdk_bdev_split.a 00:02:30.221 SO libspdk_blobfs_bdev.so.6.0 00:02:30.221 LIB libspdk_bdev_error.a 00:02:30.221 LIB libspdk_bdev_gpt.a 00:02:30.221 SO libspdk_bdev_split.so.6.0 00:02:30.221 LIB libspdk_bdev_ftl.a 00:02:30.221 SO libspdk_bdev_null.so.6.0 00:02:30.221 SO libspdk_bdev_error.so.6.0 00:02:30.479 SO libspdk_bdev_gpt.so.6.0 00:02:30.479 LIB libspdk_bdev_aio.a 00:02:30.479 LIB 
libspdk_bdev_passthru.a 00:02:30.479 LIB libspdk_bdev_zone_block.a 00:02:30.479 SO libspdk_bdev_ftl.so.6.0 00:02:30.479 SYMLINK libspdk_blobfs_bdev.so 00:02:30.479 LIB libspdk_bdev_malloc.a 00:02:30.479 SO libspdk_bdev_passthru.so.6.0 00:02:30.479 SO libspdk_bdev_aio.so.6.0 00:02:30.479 SYMLINK libspdk_bdev_split.so 00:02:30.479 LIB libspdk_bdev_delay.a 00:02:30.479 LIB libspdk_bdev_iscsi.a 00:02:30.479 SYMLINK libspdk_bdev_gpt.so 00:02:30.479 SYMLINK libspdk_bdev_null.so 00:02:30.479 SYMLINK libspdk_bdev_error.so 00:02:30.479 SO libspdk_bdev_zone_block.so.6.0 00:02:30.479 SO libspdk_bdev_malloc.so.6.0 00:02:30.479 SYMLINK libspdk_bdev_ftl.so 00:02:30.479 SO libspdk_bdev_delay.so.6.0 00:02:30.479 SO libspdk_bdev_iscsi.so.6.0 00:02:30.479 SYMLINK libspdk_bdev_passthru.so 00:02:30.479 SYMLINK libspdk_bdev_aio.so 00:02:30.479 SYMLINK libspdk_bdev_zone_block.so 00:02:30.479 SYMLINK libspdk_bdev_malloc.so 00:02:30.479 LIB libspdk_bdev_lvol.a 00:02:30.479 SYMLINK libspdk_bdev_delay.so 00:02:30.479 LIB libspdk_bdev_virtio.a 00:02:30.479 SYMLINK libspdk_bdev_iscsi.so 00:02:30.479 SO libspdk_bdev_lvol.so.6.0 00:02:30.479 SO libspdk_bdev_virtio.so.6.0 00:02:30.737 SYMLINK libspdk_bdev_lvol.so 00:02:30.737 SYMLINK libspdk_bdev_virtio.so 00:02:30.737 LIB libspdk_bdev_raid.a 00:02:30.738 SO libspdk_bdev_raid.so.6.0 00:02:30.996 SYMLINK libspdk_bdev_raid.so 00:02:31.563 LIB libspdk_bdev_nvme.a 00:02:31.563 SO libspdk_bdev_nvme.so.7.0 00:02:31.821 SYMLINK libspdk_bdev_nvme.so 00:02:32.388 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:32.388 CC module/event/subsystems/iobuf/iobuf.o 00:02:32.388 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:32.388 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:32.646 CC module/event/subsystems/sock/sock.o 00:02:32.646 CC module/event/subsystems/keyring/keyring.o 00:02:32.646 CC module/event/subsystems/vmd/vmd.o 00:02:32.646 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:32.646 CC module/event/subsystems/scheduler/scheduler.o 00:02:32.646 LIB libspdk_event_vfu_tgt.a 00:02:32.646 LIB libspdk_event_vhost_blk.a 00:02:32.646 LIB libspdk_event_iobuf.a 00:02:32.646 LIB libspdk_event_sock.a 00:02:32.646 LIB libspdk_event_keyring.a 00:02:32.646 LIB libspdk_event_vmd.a 00:02:32.646 SO libspdk_event_vfu_tgt.so.3.0 00:02:32.646 LIB libspdk_event_scheduler.a 00:02:32.646 SO libspdk_event_vhost_blk.so.3.0 00:02:32.646 SO libspdk_event_keyring.so.1.0 00:02:32.646 SO libspdk_event_sock.so.5.0 00:02:32.646 SO libspdk_event_iobuf.so.3.0 00:02:32.646 SO libspdk_event_vmd.so.6.0 00:02:32.646 SO libspdk_event_scheduler.so.4.0 00:02:32.646 SYMLINK libspdk_event_vfu_tgt.so 00:02:32.646 SYMLINK libspdk_event_vhost_blk.so 00:02:32.646 SYMLINK libspdk_event_sock.so 00:02:32.646 SYMLINK libspdk_event_keyring.so 00:02:32.646 SYMLINK libspdk_event_iobuf.so 00:02:32.904 SYMLINK libspdk_event_vmd.so 00:02:32.904 SYMLINK libspdk_event_scheduler.so 00:02:33.160 CC module/event/subsystems/accel/accel.o 00:02:33.160 LIB libspdk_event_accel.a 00:02:33.160 SO libspdk_event_accel.so.6.0 00:02:33.417 SYMLINK libspdk_event_accel.so 00:02:33.674 CC module/event/subsystems/bdev/bdev.o 00:02:33.931 LIB libspdk_event_bdev.a 00:02:33.931 SO libspdk_event_bdev.so.6.0 00:02:33.931 SYMLINK libspdk_event_bdev.so 00:02:34.187 CC module/event/subsystems/nbd/nbd.o 00:02:34.187 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:34.187 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:34.187 CC module/event/subsystems/scsi/scsi.o 00:02:34.443 CC module/event/subsystems/ublk/ublk.o 00:02:34.443 LIB 
libspdk_event_nbd.a 00:02:34.443 LIB libspdk_event_scsi.a 00:02:34.443 LIB libspdk_event_ublk.a 00:02:34.443 SO libspdk_event_nbd.so.6.0 00:02:34.443 SO libspdk_event_scsi.so.6.0 00:02:34.443 LIB libspdk_event_nvmf.a 00:02:34.443 SO libspdk_event_ublk.so.3.0 00:02:34.443 SYMLINK libspdk_event_nbd.so 00:02:34.443 SO libspdk_event_nvmf.so.6.0 00:02:34.443 SYMLINK libspdk_event_scsi.so 00:02:34.700 SYMLINK libspdk_event_ublk.so 00:02:34.700 SYMLINK libspdk_event_nvmf.so 00:02:34.957 CC module/event/subsystems/iscsi/iscsi.o 00:02:34.957 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:34.957 LIB libspdk_event_vhost_scsi.a 00:02:34.957 LIB libspdk_event_iscsi.a 00:02:35.214 SO libspdk_event_vhost_scsi.so.3.0 00:02:35.214 SO libspdk_event_iscsi.so.6.0 00:02:35.214 SYMLINK libspdk_event_vhost_scsi.so 00:02:35.214 SYMLINK libspdk_event_iscsi.so 00:02:35.471 SO libspdk.so.6.0 00:02:35.471 SYMLINK libspdk.so 00:02:35.729 CC app/spdk_nvme_identify/identify.o 00:02:35.729 CXX app/trace/trace.o 00:02:35.729 CC app/trace_record/trace_record.o 00:02:35.729 CC app/spdk_nvme_perf/perf.o 00:02:35.729 CC app/spdk_nvme_discover/discovery_aer.o 00:02:35.729 CC app/spdk_lspci/spdk_lspci.o 00:02:35.729 CC test/rpc_client/rpc_client_test.o 00:02:35.729 TEST_HEADER include/spdk/accel.h 00:02:35.729 TEST_HEADER include/spdk/accel_module.h 00:02:35.729 TEST_HEADER include/spdk/assert.h 00:02:35.729 TEST_HEADER include/spdk/base64.h 00:02:35.729 TEST_HEADER include/spdk/barrier.h 00:02:35.729 TEST_HEADER include/spdk/bdev_zone.h 00:02:35.729 TEST_HEADER include/spdk/bdev.h 00:02:35.729 TEST_HEADER include/spdk/bdev_module.h 00:02:35.729 TEST_HEADER include/spdk/bit_pool.h 00:02:35.729 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:35.729 TEST_HEADER include/spdk/bit_array.h 00:02:35.729 CC app/spdk_top/spdk_top.o 00:02:35.729 TEST_HEADER include/spdk/blob_bdev.h 00:02:35.729 TEST_HEADER include/spdk/blob.h 00:02:35.729 TEST_HEADER include/spdk/blobfs.h 00:02:35.729 TEST_HEADER include/spdk/config.h 00:02:35.729 TEST_HEADER include/spdk/cpuset.h 00:02:35.729 TEST_HEADER include/spdk/conf.h 00:02:35.729 TEST_HEADER include/spdk/crc16.h 00:02:35.729 TEST_HEADER include/spdk/crc32.h 00:02:35.729 TEST_HEADER include/spdk/crc64.h 00:02:35.729 TEST_HEADER include/spdk/dif.h 00:02:35.729 TEST_HEADER include/spdk/dma.h 00:02:35.729 TEST_HEADER include/spdk/env_dpdk.h 00:02:35.729 TEST_HEADER include/spdk/endian.h 00:02:35.729 TEST_HEADER include/spdk/env.h 00:02:35.729 TEST_HEADER include/spdk/fd_group.h 00:02:35.729 TEST_HEADER include/spdk/event.h 00:02:35.729 TEST_HEADER include/spdk/fd.h 00:02:35.729 TEST_HEADER include/spdk/ftl.h 00:02:35.729 TEST_HEADER include/spdk/gpt_spec.h 00:02:35.729 TEST_HEADER include/spdk/file.h 00:02:35.729 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:35.729 TEST_HEADER include/spdk/hexlify.h 00:02:35.729 CC app/nvmf_tgt/nvmf_main.o 00:02:35.729 TEST_HEADER include/spdk/histogram_data.h 00:02:35.729 CC app/spdk_dd/spdk_dd.o 00:02:35.729 TEST_HEADER include/spdk/idxd.h 00:02:35.729 CC app/iscsi_tgt/iscsi_tgt.o 00:02:35.729 TEST_HEADER include/spdk/idxd_spec.h 00:02:35.729 TEST_HEADER include/spdk/ioat.h 00:02:35.729 TEST_HEADER include/spdk/init.h 00:02:35.729 TEST_HEADER include/spdk/ioat_spec.h 00:02:35.729 TEST_HEADER include/spdk/iscsi_spec.h 00:02:35.729 TEST_HEADER include/spdk/json.h 00:02:35.729 TEST_HEADER include/spdk/jsonrpc.h 00:02:35.729 TEST_HEADER include/spdk/keyring.h 00:02:35.729 TEST_HEADER include/spdk/keyring_module.h 00:02:35.729 TEST_HEADER include/spdk/log.h 
00:02:35.729 TEST_HEADER include/spdk/lvol.h 00:02:35.729 TEST_HEADER include/spdk/likely.h 00:02:35.729 TEST_HEADER include/spdk/memory.h 00:02:35.729 TEST_HEADER include/spdk/mmio.h 00:02:35.729 TEST_HEADER include/spdk/nbd.h 00:02:35.729 TEST_HEADER include/spdk/notify.h 00:02:35.729 TEST_HEADER include/spdk/nvme_intel.h 00:02:35.729 TEST_HEADER include/spdk/nvme.h 00:02:35.729 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:35.729 CC app/spdk_tgt/spdk_tgt.o 00:02:35.729 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:35.729 TEST_HEADER include/spdk/nvme_zns.h 00:02:35.729 TEST_HEADER include/spdk/nvme_spec.h 00:02:35.729 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:35.729 TEST_HEADER include/spdk/nvmf_spec.h 00:02:35.729 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:35.729 TEST_HEADER include/spdk/nvmf.h 00:02:35.729 TEST_HEADER include/spdk/opal_spec.h 00:02:35.729 TEST_HEADER include/spdk/nvmf_transport.h 00:02:35.729 TEST_HEADER include/spdk/opal.h 00:02:35.729 TEST_HEADER include/spdk/pci_ids.h 00:02:35.729 TEST_HEADER include/spdk/pipe.h 00:02:35.729 TEST_HEADER include/spdk/queue.h 00:02:35.729 TEST_HEADER include/spdk/reduce.h 00:02:35.729 TEST_HEADER include/spdk/scheduler.h 00:02:35.729 TEST_HEADER include/spdk/scsi.h 00:02:35.729 TEST_HEADER include/spdk/rpc.h 00:02:35.729 TEST_HEADER include/spdk/sock.h 00:02:35.729 TEST_HEADER include/spdk/scsi_spec.h 00:02:35.729 TEST_HEADER include/spdk/stdinc.h 00:02:35.729 TEST_HEADER include/spdk/string.h 00:02:35.729 TEST_HEADER include/spdk/trace.h 00:02:35.729 TEST_HEADER include/spdk/trace_parser.h 00:02:35.729 TEST_HEADER include/spdk/thread.h 00:02:35.729 TEST_HEADER include/spdk/ublk.h 00:02:35.729 TEST_HEADER include/spdk/util.h 00:02:35.729 TEST_HEADER include/spdk/tree.h 00:02:35.729 TEST_HEADER include/spdk/uuid.h 00:02:35.729 TEST_HEADER include/spdk/version.h 00:02:35.729 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:35.729 TEST_HEADER include/spdk/vmd.h 00:02:35.729 TEST_HEADER include/spdk/xor.h 00:02:35.729 TEST_HEADER include/spdk/vhost.h 00:02:35.729 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:35.729 TEST_HEADER include/spdk/zipf.h 00:02:35.729 CXX test/cpp_headers/accel.o 00:02:35.729 CXX test/cpp_headers/accel_module.o 00:02:35.729 CXX test/cpp_headers/assert.o 00:02:35.729 CXX test/cpp_headers/base64.o 00:02:35.729 CXX test/cpp_headers/bdev.o 00:02:35.729 CXX test/cpp_headers/barrier.o 00:02:35.729 CXX test/cpp_headers/bdev_zone.o 00:02:35.729 CXX test/cpp_headers/bit_array.o 00:02:35.729 CXX test/cpp_headers/bdev_module.o 00:02:35.729 CXX test/cpp_headers/bit_pool.o 00:02:35.729 CXX test/cpp_headers/blob_bdev.o 00:02:35.994 CXX test/cpp_headers/blobfs_bdev.o 00:02:35.994 CXX test/cpp_headers/blob.o 00:02:35.994 CXX test/cpp_headers/blobfs.o 00:02:35.994 CXX test/cpp_headers/conf.o 00:02:35.994 CXX test/cpp_headers/crc16.o 00:02:35.994 CXX test/cpp_headers/cpuset.o 00:02:35.994 CXX test/cpp_headers/config.o 00:02:35.994 CXX test/cpp_headers/crc32.o 00:02:35.994 CXX test/cpp_headers/endian.o 00:02:35.994 CXX test/cpp_headers/dma.o 00:02:35.994 CXX test/cpp_headers/dif.o 00:02:35.994 CXX test/cpp_headers/crc64.o 00:02:35.994 CXX test/cpp_headers/env_dpdk.o 00:02:35.994 CXX test/cpp_headers/env.o 00:02:35.994 CXX test/cpp_headers/fd.o 00:02:35.994 CXX test/cpp_headers/event.o 00:02:35.994 CXX test/cpp_headers/fd_group.o 00:02:35.994 CXX test/cpp_headers/file.o 00:02:35.994 CXX test/cpp_headers/ftl.o 00:02:35.994 CXX test/cpp_headers/gpt_spec.o 00:02:35.994 CXX test/cpp_headers/hexlify.o 00:02:35.994 CXX 
test/cpp_headers/histogram_data.o 00:02:35.994 CXX test/cpp_headers/idxd.o 00:02:35.994 CXX test/cpp_headers/init.o 00:02:35.994 CXX test/cpp_headers/idxd_spec.o 00:02:35.995 CXX test/cpp_headers/ioat.o 00:02:35.995 CXX test/cpp_headers/iscsi_spec.o 00:02:35.995 CXX test/cpp_headers/ioat_spec.o 00:02:35.995 CXX test/cpp_headers/jsonrpc.o 00:02:35.995 CXX test/cpp_headers/json.o 00:02:35.995 CXX test/cpp_headers/keyring.o 00:02:35.995 CXX test/cpp_headers/keyring_module.o 00:02:35.995 CXX test/cpp_headers/likely.o 00:02:35.995 CXX test/cpp_headers/lvol.o 00:02:35.995 CXX test/cpp_headers/log.o 00:02:35.995 CXX test/cpp_headers/mmio.o 00:02:35.995 CXX test/cpp_headers/memory.o 00:02:35.995 CXX test/cpp_headers/nbd.o 00:02:35.995 CXX test/cpp_headers/notify.o 00:02:35.995 CXX test/cpp_headers/nvme.o 00:02:35.995 CXX test/cpp_headers/nvme_intel.o 00:02:35.995 CXX test/cpp_headers/nvme_ocssd.o 00:02:35.995 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:35.995 CXX test/cpp_headers/nvme_spec.o 00:02:35.995 CXX test/cpp_headers/nvme_zns.o 00:02:35.995 CXX test/cpp_headers/nvmf_cmd.o 00:02:35.995 CXX test/cpp_headers/nvmf_spec.o 00:02:35.995 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:35.995 CXX test/cpp_headers/nvmf.o 00:02:35.995 CXX test/cpp_headers/nvmf_transport.o 00:02:35.995 CXX test/cpp_headers/opal.o 00:02:35.995 CXX test/cpp_headers/opal_spec.o 00:02:35.995 CXX test/cpp_headers/pci_ids.o 00:02:35.995 CXX test/cpp_headers/pipe.o 00:02:35.995 CXX test/cpp_headers/queue.o 00:02:35.995 CXX test/cpp_headers/reduce.o 00:02:35.995 CXX test/cpp_headers/rpc.o 00:02:35.995 CXX test/cpp_headers/scheduler.o 00:02:35.995 CXX test/cpp_headers/scsi.o 00:02:35.995 CXX test/cpp_headers/scsi_spec.o 00:02:35.995 CXX test/cpp_headers/sock.o 00:02:35.995 CXX test/cpp_headers/stdinc.o 00:02:35.995 CXX test/cpp_headers/thread.o 00:02:35.995 CXX test/cpp_headers/string.o 00:02:35.995 CXX test/cpp_headers/trace.o 00:02:35.995 CXX test/cpp_headers/trace_parser.o 00:02:35.995 CXX test/cpp_headers/tree.o 00:02:35.995 CXX test/cpp_headers/ublk.o 00:02:35.995 CXX test/cpp_headers/util.o 00:02:35.995 CC test/thread/poller_perf/poller_perf.o 00:02:35.995 CXX test/cpp_headers/uuid.o 00:02:35.995 CXX test/cpp_headers/version.o 00:02:35.995 CXX test/cpp_headers/vfio_user_pci.o 00:02:35.995 CC examples/util/zipf/zipf.o 00:02:35.995 CC examples/ioat/verify/verify.o 00:02:35.995 CC app/fio/nvme/fio_plugin.o 00:02:35.995 CC test/app/jsoncat/jsoncat.o 00:02:35.995 CC test/env/pci/pci_ut.o 00:02:35.995 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:35.995 CXX test/cpp_headers/vfio_user_spec.o 00:02:35.995 CC test/env/vtophys/vtophys.o 00:02:35.995 CC test/dma/test_dma/test_dma.o 00:02:35.995 CC examples/ioat/perf/perf.o 00:02:35.995 CC test/app/histogram_perf/histogram_perf.o 00:02:35.995 CC test/env/memory/memory_ut.o 00:02:35.995 CXX test/cpp_headers/vhost.o 00:02:35.995 CC test/app/stub/stub.o 00:02:35.995 LINK spdk_lspci 00:02:36.280 CC test/app/bdev_svc/bdev_svc.o 00:02:36.280 CC app/fio/bdev/fio_plugin.o 00:02:36.280 CXX test/cpp_headers/vmd.o 00:02:36.280 LINK nvmf_tgt 00:02:36.553 LINK interrupt_tgt 00:02:36.553 LINK rpc_client_test 00:02:36.553 LINK iscsi_tgt 00:02:36.553 LINK spdk_nvme_discover 00:02:36.553 LINK spdk_trace_record 00:02:36.553 LINK spdk_tgt 00:02:36.553 CC test/env/mem_callbacks/mem_callbacks.o 00:02:36.811 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:36.811 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:36.811 LINK vtophys 00:02:36.811 LINK poller_perf 00:02:36.811 LINK env_dpdk_post_init 
00:02:36.811 LINK jsoncat 00:02:36.811 CXX test/cpp_headers/xor.o 00:02:36.811 CXX test/cpp_headers/zipf.o 00:02:36.811 LINK zipf 00:02:36.811 LINK histogram_perf 00:02:36.811 LINK stub 00:02:36.811 LINK bdev_svc 00:02:36.811 LINK verify 00:02:36.811 LINK ioat_perf 00:02:36.811 LINK spdk_dd 00:02:36.811 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:36.811 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:36.811 LINK spdk_trace 00:02:37.069 LINK test_dma 00:02:37.069 LINK pci_ut 00:02:37.069 LINK spdk_nvme 00:02:37.069 LINK spdk_bdev 00:02:37.069 LINK nvme_fuzz 00:02:37.069 LINK spdk_nvme_identify 00:02:37.069 LINK spdk_nvme_perf 00:02:37.327 LINK vhost_fuzz 00:02:37.327 LINK spdk_top 00:02:37.327 CC test/event/event_perf/event_perf.o 00:02:37.327 LINK mem_callbacks 00:02:37.327 CC test/event/reactor/reactor.o 00:02:37.327 CC examples/vmd/lsvmd/lsvmd.o 00:02:37.327 CC examples/vmd/led/led.o 00:02:37.327 CC examples/sock/hello_world/hello_sock.o 00:02:37.327 CC test/event/reactor_perf/reactor_perf.o 00:02:37.327 CC examples/idxd/perf/perf.o 00:02:37.327 CC app/vhost/vhost.o 00:02:37.327 CC test/event/app_repeat/app_repeat.o 00:02:37.327 CC test/event/scheduler/scheduler.o 00:02:37.327 CC examples/thread/thread/thread_ex.o 00:02:37.327 LINK event_perf 00:02:37.327 LINK lsvmd 00:02:37.327 LINK reactor_perf 00:02:37.327 LINK led 00:02:37.327 LINK reactor 00:02:37.585 LINK app_repeat 00:02:37.585 LINK vhost 00:02:37.585 CC test/nvme/reset/reset.o 00:02:37.585 CC test/nvme/sgl/sgl.o 00:02:37.585 CC test/nvme/reserve/reserve.o 00:02:37.585 CC test/nvme/simple_copy/simple_copy.o 00:02:37.585 CC test/nvme/e2edp/nvme_dp.o 00:02:37.585 CC test/nvme/overhead/overhead.o 00:02:37.585 CC test/nvme/fused_ordering/fused_ordering.o 00:02:37.585 CC test/nvme/startup/startup.o 00:02:37.585 CC test/nvme/boot_partition/boot_partition.o 00:02:37.585 CC test/blobfs/mkfs/mkfs.o 00:02:37.585 CC test/nvme/err_injection/err_injection.o 00:02:37.585 LINK hello_sock 00:02:37.585 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:37.585 CC test/nvme/compliance/nvme_compliance.o 00:02:37.585 CC test/nvme/cuse/cuse.o 00:02:37.585 CC test/nvme/fdp/fdp.o 00:02:37.585 CC test/nvme/aer/aer.o 00:02:37.585 CC test/nvme/connect_stress/connect_stress.o 00:02:37.585 CC test/accel/dif/dif.o 00:02:37.585 LINK memory_ut 00:02:37.585 LINK scheduler 00:02:37.585 LINK thread 00:02:37.585 LINK idxd_perf 00:02:37.585 CC test/lvol/esnap/esnap.o 00:02:37.585 LINK startup 00:02:37.843 LINK reserve 00:02:37.843 LINK boot_partition 00:02:37.843 LINK fused_ordering 00:02:37.843 LINK err_injection 00:02:37.843 LINK doorbell_aers 00:02:37.843 LINK connect_stress 00:02:37.843 LINK mkfs 00:02:37.843 LINK simple_copy 00:02:37.843 LINK reset 00:02:37.843 LINK overhead 00:02:37.843 LINK nvme_dp 00:02:37.843 LINK sgl 00:02:37.843 LINK aer 00:02:37.843 LINK nvme_compliance 00:02:37.843 LINK fdp 00:02:37.843 LINK dif 00:02:38.101 CC examples/nvme/hello_world/hello_world.o 00:02:38.101 CC examples/nvme/reconnect/reconnect.o 00:02:38.101 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:38.101 CC examples/nvme/abort/abort.o 00:02:38.101 CC examples/nvme/hotplug/hotplug.o 00:02:38.101 CC examples/nvme/arbitration/arbitration.o 00:02:38.101 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:38.101 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:38.101 LINK iscsi_fuzz 00:02:38.101 CC examples/accel/perf/accel_perf.o 00:02:38.101 CC examples/blob/cli/blobcli.o 00:02:38.101 CC examples/blob/hello_world/hello_blob.o 00:02:38.101 LINK pmr_persistence 00:02:38.101 
LINK cmb_copy 00:02:38.101 LINK hello_world 00:02:38.101 LINK hotplug 00:02:38.359 LINK reconnect 00:02:38.359 LINK arbitration 00:02:38.359 LINK abort 00:02:38.359 LINK nvme_manage 00:02:38.359 LINK hello_blob 00:02:38.359 CC test/bdev/bdevio/bdevio.o 00:02:38.617 LINK accel_perf 00:02:38.617 LINK cuse 00:02:38.617 LINK blobcli 00:02:38.874 LINK bdevio 00:02:39.132 CC examples/bdev/hello_world/hello_bdev.o 00:02:39.132 CC examples/bdev/bdevperf/bdevperf.o 00:02:39.389 LINK hello_bdev 00:02:39.646 LINK bdevperf 00:02:40.212 CC examples/nvmf/nvmf/nvmf.o 00:02:40.470 LINK nvmf 00:02:41.036 LINK esnap 00:02:41.293 00:02:41.293 real 0m49.737s 00:02:41.293 user 6m27.900s 00:02:41.294 sys 4m21.427s 00:02:41.294 11:29:09 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:41.294 11:29:09 make -- common/autotest_common.sh@10 -- $ set +x 00:02:41.294 ************************************ 00:02:41.294 END TEST make 00:02:41.294 ************************************ 00:02:41.294 11:29:09 -- common/autotest_common.sh@1142 -- $ return 0 00:02:41.294 11:29:09 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:41.294 11:29:09 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:41.294 11:29:09 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:41.294 11:29:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.294 11:29:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:41.294 11:29:09 -- pm/common@44 -- $ pid=1664939 00:02:41.294 11:29:09 -- pm/common@50 -- $ kill -TERM 1664939 00:02:41.294 11:29:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.294 11:29:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:41.294 11:29:09 -- pm/common@44 -- $ pid=1664941 00:02:41.294 11:29:09 -- pm/common@50 -- $ kill -TERM 1664941 00:02:41.294 11:29:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.294 11:29:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:41.294 11:29:09 -- pm/common@44 -- $ pid=1664943 00:02:41.294 11:29:09 -- pm/common@50 -- $ kill -TERM 1664943 00:02:41.294 11:29:09 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.294 11:29:09 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:41.294 11:29:09 -- pm/common@44 -- $ pid=1664967 00:02:41.294 11:29:09 -- pm/common@50 -- $ sudo -E kill -TERM 1664967 00:02:41.553 11:29:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:41.553 11:29:09 -- nvmf/common.sh@7 -- # uname -s 00:02:41.553 11:29:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:41.553 11:29:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:41.553 11:29:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:41.553 11:29:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:41.553 11:29:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:41.553 11:29:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:41.553 11:29:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:41.553 11:29:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:41.553 11:29:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:41.553 11:29:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:41.553 11:29:09 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:02:41.553 11:29:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:02:41.553 11:29:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:41.553 11:29:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:41.553 11:29:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:41.553 11:29:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:41.553 11:29:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:41.553 11:29:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:41.553 11:29:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:41.553 11:29:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:41.553 11:29:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.553 11:29:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.553 11:29:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.553 11:29:09 -- paths/export.sh@5 -- # export PATH 00:02:41.553 11:29:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:41.553 11:29:09 -- nvmf/common.sh@47 -- # : 0 00:02:41.553 11:29:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:41.553 11:29:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:41.553 11:29:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:41.553 11:29:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:41.553 11:29:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:41.553 11:29:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:41.553 11:29:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:41.553 11:29:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:41.553 11:29:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:41.553 11:29:09 -- spdk/autotest.sh@32 -- # uname -s 00:02:41.553 11:29:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:41.553 11:29:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:41.553 11:29:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.553 11:29:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:41.553 11:29:09 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:41.553 11:29:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:41.553 11:29:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:41.553 11:29:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:41.553 11:29:09 -- spdk/autotest.sh@48 -- # udevadm_pid=1725963 00:02:41.553 11:29:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:41.553 11:29:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:41.553 11:29:09 -- pm/common@17 -- # local monitor 00:02:41.553 11:29:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.553 11:29:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.553 11:29:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.553 11:29:09 -- pm/common@21 -- # date +%s 00:02:41.553 11:29:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:41.553 11:29:09 -- pm/common@21 -- # date +%s 00:02:41.553 11:29:09 -- pm/common@25 -- # sleep 1 00:02:41.553 11:29:09 -- pm/common@21 -- # date +%s 00:02:41.553 11:29:09 -- pm/common@21 -- # date +%s 00:02:41.553 11:29:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035749 00:02:41.553 11:29:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035749 00:02:41.553 11:29:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035749 00:02:41.553 11:29:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721035749 00:02:41.553 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035749_collect-vmstat.pm.log 00:02:41.553 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035749_collect-cpu-load.pm.log 00:02:41.553 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035749_collect-cpu-temp.pm.log 00:02:41.553 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721035749_collect-bmc-pm.bmc.pm.log 00:02:42.489 11:29:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:42.489 11:29:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:42.489 11:29:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:42.489 11:29:10 -- common/autotest_common.sh@10 -- # set +x 00:02:42.489 11:29:10 -- spdk/autotest.sh@59 -- # create_test_list 00:02:42.489 11:29:10 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:42.489 11:29:10 -- common/autotest_common.sh@10 -- # set +x 00:02:42.489 11:29:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:42.489 11:29:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.489 11:29:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
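Each collect-* monitor launched above takes -d <output dir>, -l to log its samples into that directory (hence the four "Redirecting to ... .pm.log" lines), and -p <pidfile>; the stop_monitor_resources step near the top of this log later reads those pidfiles and sends kill -TERM. A minimal sketch of that convention, using a hypothetical loadavg sampler rather than the real scripts under spdk/scripts/perf/pm/:

  #!/usr/bin/env bash
  # Hypothetical sampler illustrating the collect-* convention seen above
  # (-d output dir, -l log to file, -p pidfile). A sketch only, not the
  # actual SPDK monitor scripts.
  outdir=. logfile=/dev/stdout pidname=monitor.pid want_log=0
  while getopts 'd:lp:' opt; do
    case $opt in
      d) outdir=$OPTARG ;;
      l) want_log=1 ;;                            # resolve path once outdir is known
      p) pidname=$OPTARG ;;
      *) exit 1 ;;
    esac
  done
  (( want_log )) && logfile="$outdir/monitor.pm.log"
  echo $$ > "$outdir/$pidname"                    # pidfile for the TERM-based cleanup
  trap 'rm -f "$outdir/$pidname"; exit 0' TERM    # stop_monitor_resources sends kill -TERM $pid
  while :; do
    read -r load _ < /proc/loadavg                # one sample per second
    printf '%(%s)T %s\n' -1 "$load" >> "$logfile"
    sleep 1
  done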
00:02:42.489 11:29:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:42.489 11:29:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:42.489 11:29:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:42.489 11:29:10 -- common/autotest_common.sh@1455 -- # uname 00:02:42.489 11:29:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:42.489 11:29:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:42.489 11:29:10 -- common/autotest_common.sh@1475 -- # uname 00:02:42.489 11:29:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:42.489 11:29:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:42.489 11:29:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:42.489 11:29:10 -- spdk/autotest.sh@72 -- # hash lcov 00:02:42.489 11:29:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:42.489 11:29:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:42.489 --rc lcov_branch_coverage=1 00:02:42.489 --rc lcov_function_coverage=1 00:02:42.489 --rc genhtml_branch_coverage=1 00:02:42.489 --rc genhtml_function_coverage=1 00:02:42.489 --rc genhtml_legend=1 00:02:42.489 --rc geninfo_all_blocks=1 00:02:42.489 ' 00:02:42.489 11:29:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:42.489 --rc lcov_branch_coverage=1 00:02:42.489 --rc lcov_function_coverage=1 00:02:42.489 --rc genhtml_branch_coverage=1 00:02:42.489 --rc genhtml_function_coverage=1 00:02:42.489 --rc genhtml_legend=1 00:02:42.489 --rc geninfo_all_blocks=1 00:02:42.489 ' 00:02:42.489 11:29:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:42.489 --rc lcov_branch_coverage=1 00:02:42.489 --rc lcov_function_coverage=1 00:02:42.489 --rc genhtml_branch_coverage=1 00:02:42.489 --rc genhtml_function_coverage=1 00:02:42.489 --rc genhtml_legend=1 00:02:42.489 --rc geninfo_all_blocks=1 00:02:42.489 --no-external' 00:02:42.489 11:29:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:42.489 --rc lcov_branch_coverage=1 00:02:42.489 --rc lcov_function_coverage=1 00:02:42.489 --rc genhtml_branch_coverage=1 00:02:42.489 --rc genhtml_function_coverage=1 00:02:42.489 --rc genhtml_legend=1 00:02:42.489 --rc geninfo_all_blocks=1 00:02:42.489 --no-external' 00:02:42.489 11:29:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:42.748 lcov: LCOV version 1.14 00:02:42.748 11:29:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:44.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:44.126 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:44.126 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:44.126 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:44.126 
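The capture above runs lcov in -c -i ("initial") mode, which records a zero-count baseline for every instrumented object before any test executes; headers that never instantiate a function are exactly what triggers the geninfo "no functions found" warnings that follow. The standard lcov flow then captures a second tracefile after the tests and merges the two with -a, so files the tests never touched still appear at 0%. A sketch of that workflow; only the Baseline step is shown in this excerpt, and the post-test file names below are illustrative assumptions, not the exact autotest commands:

  # Standard lcov baseline+merge flow (sketch; cov_test/cov_total names assumed).
  SRC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  OUT="$SRC/../output"
  LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$SRC" \
       -o "$OUT/cov_base.info"                 # zero-count baseline (the command above)
  # ... run the tests; executing code updates the .gcda counters ...
  lcov $LCOV_OPTS --no-external -q -c -t Tests -d "$SRC" \
       -o "$OUT/cov_test.info"                 # post-test capture
  lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
       -o "$OUT/cov_total.info"                # merge: untouched files still show at 0%
  genhtml "$OUT/cov_total.info" -o "$OUT/coverage"   # optional HTML report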
[... matching pairs of '<header>.gcno:no functions found' and 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno' elided for the remaining test/cpp_headers files: bdev, bdev_module, accel_module, assert, bit_pool, bit_array, base64, blob_bdev, blobfs_bdev, blobfs, crc16, conf, barrier, blob, endian, config, cpuset, env_dpdk, env, file, dma, crc32, gpt_spec, ftl, event, crc64, dif, histogram_data, fd_group, init, fd, ioat, hexlify, iscsi_spec, idxd, jsonrpc, idxd_spec, ioat_spec, mmio, keyring, json, keyring_module, notify, likely, lvol, nvme, log, nvme_intel, nvme_spec, nbd, nvmf_cmd, memory, nvme_ocssd, nvme_ocssd_spec, nvmf_spec, nvme_zns, nvmf, opal_spec, nvmf_fc_spec, opal, pipe, pci_ids, nvmf_transport, queue, reduce, rpc, scsi, scsi_spec, scheduler, stdinc, trace, sock, thread, string, trace_parser, util, tree, ublk, version, uuid, vfio_user_pci, vfio_user_spec ...] 00:02:44.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:44.647 geninfo:
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:44.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:44.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:44.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:44.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:44.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:44.647 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:56.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:56.939 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:09.150 11:29:35 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:09.150 11:29:35 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:09.150 11:29:35 -- common/autotest_common.sh@10 -- # set +x 00:03:09.150 11:29:35 -- spdk/autotest.sh@91 -- # rm -f 00:03:09.150 11:29:35 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.056 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:11.056 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:11.314 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:11.314 11:29:39 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:11.314 11:29:39 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:11.314 11:29:39 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:11.314 11:29:39 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:11.314 11:29:39 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:11.314 11:29:39 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:11.314 11:29:39 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:11.315 11:29:39 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:11.315 11:29:39 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:11.315 11:29:39 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:11.315 11:29:39 -- 
spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.315 11:29:39 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:11.315 11:29:39 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:11.315 11:29:39 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:11.315 11:29:39 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:11.315 No valid GPT data, bailing 00:03:11.315 11:29:39 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:11.315 11:29:39 -- scripts/common.sh@391 -- # pt= 00:03:11.315 11:29:39 -- scripts/common.sh@392 -- # return 1 00:03:11.315 11:29:39 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:11.573 1+0 records in 00:03:11.573 1+0 records out 00:03:11.573 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00563329 s, 186 MB/s 00:03:11.573 11:29:39 -- spdk/autotest.sh@118 -- # sync 00:03:11.573 11:29:39 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:11.573 11:29:39 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:11.573 11:29:39 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:18.136 11:29:45 -- spdk/autotest.sh@124 -- # uname -s 00:03:18.136 11:29:45 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:18.136 11:29:45 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.136 11:29:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.136 11:29:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.136 11:29:45 -- common/autotest_common.sh@10 -- # set +x 00:03:18.136 ************************************ 00:03:18.136 START TEST setup.sh 00:03:18.136 ************************************ 00:03:18.136 11:29:46 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:18.136 * Looking for test storage... 00:03:18.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.136 11:29:46 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:18.136 11:29:46 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:18.136 11:29:46 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:18.136 11:29:46 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.136 11:29:46 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.136 11:29:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.136 ************************************ 00:03:18.136 START TEST acl 00:03:18.136 ************************************ 00:03:18.136 11:29:46 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:18.394 * Looking for test storage... 
00:03:18.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.394 11:29:46 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:18.394 11:29:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:18.394 11:29:46 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:18.395 11:29:46 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:18.395 11:29:46 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:18.395 11:29:46 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:18.395 11:29:46 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:18.395 11:29:46 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:18.395 11:29:46 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:18.395 11:29:46 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.395 11:29:46 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:22.581 11:29:50 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:22.581 11:29:50 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:22.581 11:29:50 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:22.581 11:29:50 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:22.581 11:29:50 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.581 11:29:50 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:25.863 Hugepages 00:03:25.863 node hugesize free / total 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.863 00:03:25.863 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.863 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ [... the same setup/acl.sh@19/@20 ioatdma check-and-continue trace elided for 0000:00:04.2 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.3 ...] 00:03:25.864 11:29:53 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:25.864 11:29:53 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:25.864 11:29:53 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.864 11:29:53 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.864 11:29:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:25.864 ************************************ 00:03:25.864 START TEST denied 00:03:25.864 ************************************ 00:03:25.864 11:29:53 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:25.864 11:29:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:03:25.864 11:29:53 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:25.864 11:29:53 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:03:25.864 11:29:53 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.864 11:29:53 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:29.150 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:03:29.150 11:29:56 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:29.150 11:29:56 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.432 00:03:34.432 real 0m8.006s 00:03:34.432 user 0m2.628s 00:03:34.432 sys 0m4.752s 00:03:34.432 11:30:01 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.432 11:30:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:34.432 ************************************ 00:03:34.432 END TEST denied 00:03:34.432 ************************************ 00:03:34.432 11:30:01 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:34.432 11:30:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:34.432 11:30:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.432 11:30:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.432 11:30:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:34.432 ************************************ 00:03:34.432 START TEST allowed 00:03:34.432 ************************************ 00:03:34.432 11:30:01 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:34.432 11:30:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:03:34.432 11:30:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:34.432 11:30:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:03:34.432 11:30:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.432 11:30:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.623 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.623 11:30:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:38.623 11:30:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:38.623 11:30:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:38.623 11:30:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:38.623 11:30:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:42.817 00:03:42.817 real 0m8.496s 00:03:42.817 user 0m2.384s 00:03:42.817 sys 0m4.693s 00:03:42.817 11:30:10 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.817 11:30:10 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:42.817 ************************************ 00:03:42.817 END TEST allowed 00:03:42.817 ************************************ 00:03:42.817 11:30:10 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:42.817 00:03:42.817 real 0m24.006s 00:03:42.818 user 0m7.638s 00:03:42.818 sys 0m14.594s 00:03:42.818 11:30:10 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:42.818 11:30:10 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:42.818 ************************************ 00:03:42.818 END TEST acl 00:03:42.818 ************************************ 00:03:42.818 11:30:10 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:42.818 11:30:10 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.818 11:30:10 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:42.818 11:30:10 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.818 11:30:10 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:42.818 ************************************ 00:03:42.818 START TEST hugepages 00:03:42.818 ************************************ 00:03:42.818 11:30:10 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:42.818 * Looking for test storage... 00:03:42.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 40520540 kB' 'MemAvailable: 45468576 kB' 'Buffers: 2704 kB' 'Cached: 11362836 kB' 'SwapCached: 0 kB' 'Active: 7245840 kB' 'Inactive: 4656152 kB' 'Active(anon): 6854792 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539876 kB' 'Mapped: 176856 kB' 'Shmem: 6318340 kB' 'KReclaimable: 548544 kB' 'Slab: 1190160 kB' 'SReclaimable: 548544 kB' 'SUnreclaim: 641616 kB' 'KernelStack: 22336 kB' 'PageTables: 8976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36439072 kB' 'Committed_AS: 8309888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217176 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.818 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ [... the same setup/common.sh@32 field-name check and continue trace elided for each subsequent /proc/meminfo field until Hugepagesize matches ...]
setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:42.819 
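The scan traced above is a plain read loop over /proc/meminfo. A minimal sketch of the pattern, assuming a simplified standalone helper (hypothetical; the real setup/common.sh version is node-aware and reads from a pre-loaded array rather than the file directly):

    #!/usr/bin/env bash
    # Simplified stand-in for the get_meminfo scan traced above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # Each line looks like "Hugepagesize:    2048 kB"; IFS=': ' splits
            # it into key, value, and unit, so a hit can echo the bare value.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo Hugepagesize   # prints 2048 on this test node

Matching the trace, a hit prints the bare value (2048) and returns 0; every other key simply falls through to the next read, which is what produces the long run of continue records in the raw log.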
00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:42.819 11:30:10 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:42.819 ************************************
00:03:42.820 START TEST default_setup
00:03:42.820 ************************************
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
[trace condensed: get_test_nr_hugepages_per_node books 1024 pages against user node 0; the arithmetic is sketched at the end of this excerpt]
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:42.820 11:30:10 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:46.104 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:46.104 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:47.549 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci
00:03:47.549 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:47.549 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:47.549 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[trace condensed: get_meminfo prologue, node unset, mem_f=/proc/meminfo, mapfile -t mem, then the full snapshot below; the ioatdma/nvme rebinding above is sketched after this block]
00:03:47.550 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42712956 kB' 'MemAvailable: 47660960 kB' 'Buffers: 2704 kB' 'Cached: 11362964 kB' 'SwapCached: 0 kB' 'Active: 7264676 kB' 'Inactive: 4656152 kB' 'Active(anon): 6873628 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558156 kB' 'Mapped: 177060 kB' 'Shmem: 6318468 kB' 'KReclaimable: 548512 kB' 'Slab: 1188160 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639648 kB' 'KernelStack: 22480 kB' 'PageTables: 9172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8327168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217096 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
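The ioatdma/nvme rebind lines above come from scripts/setup.sh. A hedged sketch of the generic sysfs flow behind such a rebind (driver_override is the standard kernel mechanism; SPDK's actual script layers device allowlisting and hugepage setup on top of this, so treat the snippet as illustrative):

    # Rebind one PCI function (address taken from the log) to vfio-pci.
    # Standard sysfs driver_override flow; must run as root.
    dev=0000:00:04.7
    echo vfio-pci > "/sys/bus/pci/devices/$dev/driver_override"
    if [[ -e /sys/bus/pci/devices/$dev/driver ]]; then
        echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind"   # detach ioatdma
    fi
    echo "$dev" > /sys/bus/pci/drivers_probe                      # attach vfio-pci

driver_override pins the next probe of that device to the named driver, which is why the log can report a clean ioatdma -> vfio-pci transition per function.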
[trace condensed: the same [[ <field> == AnonHugePages ]] / continue pair repeats for every field of the snapshot above until AnonHugePages matches; its value is 0 kB]
00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
[trace condensed: same get_meminfo prologue, node unset, mem_f=/proc/meminfo, mapfile -t mem, followed by a fresh snapshot; the verification bookkeeping is sketched below]
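With anon=0 established, verify_nr_hugepages goes on to collect the surplus and reserved counters the same way. A sketch of the bookkeeping, reusing the hypothetical get_meminfo helper from the first sketch (the upstream function also cross-checks per-node sysfs counters, which is omitted here):

    # Collect the counters the trace is gathering and sanity-check the pool.
    anon=$(get_meminfo AnonHugePages)    # 0 kB: no transparent hugepages in use
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no reserved pages
    total=$(get_meminfo HugePages_Total)
    free=$(get_meminfo HugePages_Free)
    (( total == 1024 && free == total )) \
        || echo "hugepage pool mismatch: total=$total free=$free" >&2

On this node all three adjustment counters are zero, so the 1024-page pool the snapshot reports is exactly what default_setup requested.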
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42714828 kB' 'MemAvailable: 47662832 kB' 'Buffers: 2704 kB' 'Cached: 11362968 kB' 'SwapCached: 0 kB' 'Active: 7263448 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872400 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557344 kB' 'Mapped: 177032 kB' 'Shmem: 6318472 kB' 'KReclaimable: 548512 kB' 'Slab: 1188128 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639616 kB' 'KernelStack: 22192 kB' 'PageTables: 8600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8327188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217112 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.551 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:47.552 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue for SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd (none == HugePages_Surp)
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
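A note on the escaped strings in this trace: under set -x, bash prints the right-hand side of [[ $var == "$get" ]] with every character backslash-escaped (hence the \H\u\g\e\P\a\g\e\s\_\S\u\r\p runs above), because a quoted expansion on that side is matched literally rather than as a glob pattern. A minimal standalone demo of the effect (not part of the SPDK scripts):

    #!/usr/bin/env bash
    # Demo: xtrace renders a quoted == pattern with each character escaped.
    set -x
    get=HugePages_Surp
    var=MemTotal
    # Quoted RHS is compared literally; the trace line for this test reads
    # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == "$get" ]] && echo match || echo no-match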
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42715328 kB' 'MemAvailable: 47663332 kB' 'Buffers: 2704 kB' 'Cached: 11362984 kB' 'SwapCached: 0 kB' 'Active: 7263156 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872108 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556976 kB' 'Mapped: 176984 kB' 'Shmem: 6318488 kB' 'KReclaimable: 548512 kB' 'Slab: 1188204 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639692 kB' 'KernelStack: 22336 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8327208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217160 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:03:47.553 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue for MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free (none == HugePages_Rsvd)
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
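Every get_meminfo call traced here follows the same pattern: slurp the target meminfo file with mapfile, strip any "Node <n> " prefix with an extglob substitution, then scan the "key: value" lines with an IFS=': ' read loop until the requested key matches. A minimal sketch of that technique, simplified from what the trace shows (not the verbatim setup/common.sh; assumes a Linux /proc and /sys layout):

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    # get_meminfo KEY [NODE] -> prints the value column for KEY.
    # Reads /proc/meminfo, or the per-node file when NODE is given and exists.
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix of per-node files
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Total    # e.g. 1024
    get_meminfo HugePages_Surp 0   # per-node lookup, e.g. 0

The escaped comparison records and the long printf dumps in this log are exactly this loop running under xtrace, once per /proc/meminfo line until the key is found.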
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:47.555 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42714308 kB' 'MemAvailable: 47662312 kB' 'Buffers: 2704 kB' 'Cached: 11362984 kB' 'SwapCached: 0 kB' 'Active: 7263388 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872340 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557208 kB' 'Mapped: 176984 kB' 'Shmem: 6318488 kB' 'KReclaimable: 548512 kB' 'Slab: 1188204 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639692 kB' 'KernelStack: 22352 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8327232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217272 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:03:47.556 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue for MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted (none == HugePages_Total)
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
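The check at setup/hugepages.sh@110 just above verifies that the kernel actually delivered the requested pool: HugePages_Total read back from /proc/meminfo must equal the requested nr_hugepages plus any surplus and reserved pages. A compact standalone sketch of the same assertion (illustrative only, not the SPDK script itself):

    #!/usr/bin/env bash
    # Assert that the kernel-visible hugepage pool matches the requested size.
    nr_hugepages=1024   # what the test asked for via vm.nr_hugepages
    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp/  {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd/  {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) \
        || { echo "hugepage pool mismatch: $total != $nr_hugepages+$surp+$resv" >&2; exit 1; }
    echo "ok: HugePages_Total=$total"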
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
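get_nodes above counted two NUMA nodes and recorded the expected split (1024 pages on node0, 0 on node1); the loop that follows re-reads each node's counters from /sys/devices/system/node/nodeN/meminfo to confirm it. A standalone sketch of that per-node consistency check, mirroring the traced nodes_sys bookkeeping (illustrative, assuming the same sysfs layout):

    #!/usr/bin/env bash
    shopt -s extglob

    # Verify that per-node HugePages_Total counters add up to the global total.
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        n=${node##*node}
        # Per-node lines look like "Node 0 HugePages_Total:  1024".
        nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done

    total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
    sum=0
    for n in "${!nodes_sys[@]}"; do (( sum += nodes_sys[n] )); done

    (( sum == total )) && echo "per-node counters consistent: $sum == $total" \
                       || echo "mismatch: $sum != $total"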
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:47.817 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25611204 kB' 'MemUsed: 7027936 kB' 'SwapCached: 0 kB' 'Active: 2994232 kB' 'Inactive: 622632 kB' 'Active(anon): 2690840 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188128 kB' 'Mapped: 132032 kB' 'AnonPages: 431904 kB' 'Shmem: 2262104 kB' 'KernelStack: 13320 kB' 'PageTables: 6412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665652 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read -r var val _ / continue for MemTotal MemFree MemUsed SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked Dirty Writeback FilePages Mapped AnonPages Shmem KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp KReclaimable Slab (no match yet for HugePages_Surp)
00:03:47.818 11:30:15 setup.sh.hugepages.default_setup
-- setup/common.sh@31 -- # IFS=': ' 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.818 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:47.819 node0=1024 expecting 1024 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:47.819 00:03:47.819 real 0m5.283s 00:03:47.819 user 0m1.392s 00:03:47.819 sys 0m2.424s 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.819 11:30:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:47.819 ************************************ 00:03:47.819 END TEST default_setup 00:03:47.819 ************************************ 00:03:47.819 11:30:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:47.819 11:30:15 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:47.819 11:30:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.819 11:30:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.819 11:30:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:47.819 ************************************ 00:03:47.819 START TEST per_node_1G_alloc 00:03:47.819 ************************************ 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:47.819 11:30:15 
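The compare-and-continue runs that dominate this log are the xtrace of setup/common.sh's get_meminfo helper: it snapshots a meminfo file and walks it field by field until it reaches the requested key. A minimal bash sketch of that pattern, reconstructed from what the trace shows rather than copied from SPDK's source:

    # get_meminfo FIELD [NODE] -- scan /proc/meminfo (or a per-node meminfo)
    # and print the value of FIELD. IFS=': ' splits each "Name: value kB"
    # line into the field name, the number, and the trailing unit.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # the trace probes a per-node file first when a node is given
        # (the real helper also strips the "Node N " prefix those files carry)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
            continue   # every skipped field shows up as one "continue" above
        done < "$mem_f"
        return 1
    }

On this host, get_meminfo HugePages_Surp prints 0, which is the "echo 0" visible in the trace above.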
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:47.819 11:30:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:51.117 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:51.117 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
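A note on the sizing that got us here: get_test_nr_hugepages was called with 1048576 (kB, i.e. 1 GiB) for nodes 0 and 1, and at the default 2048 kB hugepage size that works out to nr_hugepages=512 per node, which is why nodes_test[0] and nodes_test[1] were both set to 512 and scripts/setup.sh was driven with NRHUGE=512 HUGENODE=0,1. A sketch of the same arithmetic (variable names follow the trace; the kB unit is inferred from the numbers, and this is an illustration, not the hugepages.sh source):

    # 1 GiB per node expressed in kB, as in "get_test_nr_hugepages 1048576 0 1"
    size=1048576
    default_hugepages=2048                         # Hugepagesize, in kB
    nr_hugepages=$(( size / default_hugepages ))   # 512 pages
    user_nodes=(0 1)
    declare -a nodes_test
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages             # 512 pages requested per node
    done
    # scripts/setup.sh picks the target up from the environment, as in the log:
    NRHUGE=$nr_hugepages HUGENODE=0,1 ./scripts/setup.sh

The vfio-pci lines above are setup.sh's normal rebinding pass: every device it manages is already bound to vfio-pci, so there is nothing to rebind, and the script moves straight on to the hugepage verification below.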
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42724280 kB' 'MemAvailable: 47672284 kB' 'Buffers: 2704 kB' 'Cached: 11363112 kB' 'SwapCached: 0 kB' 'Active: 7263932 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872884 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557464 kB' 'Mapped: 176992 kB' 'Shmem: 6318616 kB' 'KReclaimable: 548512 kB' 'Slab: 1187876 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639364 kB' 'KernelStack: 22464 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217416 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.117 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:51.118 [scan continues: every field from MemFree through HardwareCorrupted fails the match against AnonHugePages and is skipped with "continue"; the identical iterations are omitted here]
00:03:51.118 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:51.118 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.118 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
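verify_nr_hugepages gathers three system-wide counters before it checks per-node state: AnonHugePages (read only because /sys/kernel/mm/transparent_hugepage/enabled reported "always [madvise] never", i.e. THP is not fully disabled), then HugePages_Surp and HugePages_Rsvd. All three come back 0 here, so nothing has to be discounted from the 1024 configured pages. A sketch of that bookkeeping, reusing the get_meminfo sketch above; the final arithmetic is an assumed reconstruction, since the exact hugepages.sh formula is not visible in this excerpt:

    # Variable names (anon, surp, resv) follow the trace; the "expecting"
    # calculation is an assumption about what the verification compares.
    anon=$(get_meminfo AnonHugePages)    # 0: no THP-backed anonymous memory
    surp=$(get_meminfo HugePages_Surp)   # 0: no surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted
    total=$(get_meminfo HugePages_Total) # 1024
    echo "node0=${total} expecting $(( total - surp ))"   # node0=1024 expecting 1024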
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42725116 kB' 'MemAvailable: 47673120 kB' 'Buffers: 2704 kB' 'Cached: 11363116 kB' 'SwapCached: 0 kB' 'Active: 7263704 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872656 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557284 kB' 'Mapped: 176980 kB' 'Shmem: 6318620 kB' 'KReclaimable: 548512 kB' 'Slab: 1187960 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639448 kB' 'KernelStack: 22272 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217272 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.119 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:51.120 [scan continues: every field from MemFree through HugePages_Rsvd fails the match against HugePages_Surp and is skipped with "continue"; the identical iterations are omitted here]
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
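Each snapshot in this pass is internally consistent: with HugePages_Total at 1024 and Hugepagesize at 2048 kB, the pool should span 1024 x 2048 kB = 2097152 kB (2 GiB), which is exactly the Hugetlb figure printed above. On kernels that expose the Hugetlb line, the same spot-check can be run with a one-liner:

    awk '/^HugePages_Total/ {t=$2} /^Hugepagesize/ {s=$2} /^Hugetlb/ {h=$2}
         END {if (t*s == h) print "consistent:", t*s, "kB"; else print "inconsistent:", t*s, "kB vs", h, "kB"}' /proc/meminfo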
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.121 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42725312 kB' 'MemAvailable: 47673316 kB' 'Buffers: 2704 kB' 'Cached: 11363132 kB' 'SwapCached: 0 kB' 'Active: 7263972 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872924 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557568 kB' 'Mapped: 176980 kB' 'Shmem: 6318636 kB' 'KReclaimable: 548512 kB' 'Slab: 1187960 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639448 kB' 'KernelStack: 22304 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8326284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217256 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
... (per-key @31/@32 scan against HugePages_Rsvd; every field from MemTotal through HugePages_Free is compared and skipped with continue) ...
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:51.123 nr_hugepages=1024
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:51.123 resv_hugepages=0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:51.123 surplus_hugepages=0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:51.123 anon_hugepages=0
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.123 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42725680 kB' 'MemAvailable: 47673684 kB' 'Buffers: 2704 kB' 'Cached: 11363156 kB' 'SwapCached: 0 kB' 'Active: 7263736 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872688 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557288 kB' 'Mapped: 176980 kB' 'Shmem: 6318660 kB' 'KReclaimable: 548512 kB' 'Slab: 1187952 kB' 'SReclaimable: 548512 kB' 'SUnreclaim: 639440 kB' 'KernelStack: 22272 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
... (per-key @31/@32 scan against HugePages_Total; every earlier field is compared and skipped with continue) ...
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
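The arithmetic guards around these lookups check that the kernel's hugepage accounting is self-consistent: HugePages_Total should equal the requested nr_hugepages plus any surplus and reserved pages. A sketch of that check, reusing the hypothetical get_meminfo_value helper from the earlier note (the variable names are illustrative, not the exact setup/hugepages.sh code):

    # Hypothetical recreation of the hugepages accounting check;
    # 1024 pages were requested on this run.
    nr_hugepages=1024
    surp=$(get_meminfo_value HugePages_Surp)    # 0 in this log
    resv=$(get_meminfo_value HugePages_Rsvd)    # 0 in this log
    total=$(get_meminfo_value HugePages_Total)  # 1024 in this log

    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting is consistent: $total pages"
    else
        echo "unexpected HugePages_Total: $total" >&2
        exit 1
    fi

With surp=0 and resv=0, both (( ... )) guards in the trace evaluate true, so the test proceeds to the per-node allocation step below.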
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:51.124 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:51.125 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26684080 kB' 'MemUsed: 5955060 kB' 'SwapCached: 0 kB' 'Active: 2994816 kB' 'Inactive: 622632 kB' 'Active(anon): 2691424 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188144 kB' 'Mapped: 132040 kB' 'AnonPages: 432408 kB' 'Shmem: 2262120 kB' 'KernelStack: 13112 kB' 'PageTables: 5868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665484 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
... (per-key @31/@32 scan against HugePages_Surp; the chunk ends mid-scan after the HugePages_Total comparison) ...
00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.126 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16041892 kB' 'MemUsed: 11614208 kB' 'SwapCached: 0 kB' 'Active: 4270468 kB' 'Inactive: 4033520 kB' 'Active(anon): 4182812 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8177760 kB' 'Mapped: 45444 kB' 'AnonPages: 126364 kB' 'Shmem: 4056584 kB' 'KernelStack: 9144 kB' 'PageTables: 2592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198432 kB' 'Slab: 522468 kB' 'SReclaimable: 198432 kB' 'SUnreclaim: 324036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
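
[editor's note] The records around this point trace setup/common.sh's get_meminfo helper: it mapfile-reads the chosen meminfo file, /proc/meminfo by default or /sys/devices/system/node/node<N>/meminfo when a node id is passed (node 1 here), strips the "Node <N> " prefix, then walks the fields with IFS=': ' read -r var val _, continuing past every non-matching key until the requested one (HugePages_Surp) is found and echoed; the walk over the node1 dump printed above follows below. A minimal self-contained sketch of that pattern, with illustrative names rather than SPDK's own code:

    # Sketch only: echoes the numeric value of one meminfo field, or 0.
    get_meminfo_field() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        # Per-node files live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines carry a "Node <id> " prefix; drop it so the key
        # sits first, split each line on ": ", and stop at the match.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0
    }
    # Example: get_meminfo_field HugePages_Surp 1   ->  0 on this box
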
[... repetitive xtrace elided: the same IFS=': ' read/continue loop walks the node1 meminfo fields (MemTotal, MemFree, MemUsed, ..., FilePmdMapped) while searching for HugePages_Surp ...]
00:03:51.387 11:30:19
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:51.387 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:51.388 node0=512 expecting 512 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:51.388 node1=512 expecting 512 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:51.388 00:03:51.388 real 0m3.468s 00:03:51.388 user 0m1.321s 00:03:51.388 sys 0m2.192s 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.388 11:30:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:51.388 ************************************ 00:03:51.388 END TEST per_node_1G_alloc 00:03:51.388 ************************************ 00:03:51.388 11:30:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:51.388 11:30:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:51.388 11:30:19 setup.sh.hugepages -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.388 11:30:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.388 11:30:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.388 ************************************ 00:03:51.388 START TEST even_2G_alloc 00:03:51.388 ************************************ 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.388 11:30:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:53.925 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:53.925 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 
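
[editor's note] The even_2G_alloc records above compute the reservation that scripts/setup.sh applies next (its device-binding output continues in the records below): a 2097152 kB request at the default 2048 kB hugepage size gives nr_hugepages=1024, and with 2 NUMA nodes plus HUGE_EVEN_ALLOC=yes the split is 512 pages per node. A rough sketch of that arithmetic and the sysfs writes it implies, assuming the standard per-node hugepages interface and not SPDK's setup.sh itself:

    # Sketch only: 2 GiB of 2048 kB pages spread evenly across nodes.
    size_kb=2097152                 # total hugepage memory requested, kB
    hugepagesize_kb=2048            # "Hugepagesize:" from /proc/meminfo
    nr_hugepages=$(( size_kb / hugepagesize_kb ))    # -> 1024
    nodes=( /sys/devices/system/node/node[0-9]* )
    per_node=$(( nr_hugepages / ${#nodes[@]} ))      # -> 512 on 2 nodes
    for n in "${nodes[@]}"; do
        # Writing nr_hugepages actually reserves the pages (needs root).
        echo "$per_node" | sudo tee \
            "$n/hugepages/hugepages-${hugepagesize_kb}kB/nr_hugepages" >/dev/null
    done
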
00:03:53.925 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:53.925 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:53.925 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:53.925 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:53.925 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:54.188 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.188 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42722824 kB' 'MemAvailable: 47670820 kB' 'Buffers: 2704 kB' 'Cached: 11363272 kB' 'SwapCached: 0 kB' 'Active: 7262280 kB' 'Inactive: 4656152 kB' 'Active(anon): 6871232 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555840 kB' 'Mapped: 175968 kB' 'Shmem: 6318776 kB' 'KReclaimable: 548504 kB' 'Slab: 1187928 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639424 kB' 'KernelStack: 22304 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8319512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217304 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[... repetitive xtrace elided: the IFS=': ' read/continue loop walks every /proc/meminfo field of the dump above (MemTotal, MemFree, MemAvailable, ..., HardwareCorrupted) while searching for AnonHugePages ...]
00:03:54.189 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.189 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.189 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.190 11:30:22
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42724300 kB' 'MemAvailable: 47672296 kB' 'Buffers: 2704 kB' 'Cached: 11363276 kB' 'SwapCached: 0 kB' 'Active: 7261628 kB' 'Inactive: 4656152 kB' 'Active(anon): 6870580 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555184 kB' 'Mapped: 175896 kB' 'Shmem: 6318780 kB' 'KReclaimable: 548504 kB' 'Slab: 1187960 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639456 kB' 'KernelStack: 22240 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8319728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.190 11:30:22 setup.sh.hugepages.even_2G_alloc -- 
[... repetitive xtrace elided: the IFS=': ' read/continue loop walks the /proc/meminfo fields of the dump above again (MemAvailable, Buffers, Cached, ...), this time searching for HugePages_Surp; the scan is still in progress here ...]
00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.191 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.191 
11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- 
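For anyone reading this trace, the per-key churn above is one small helper re-executed for each query. The following is a minimal bash sketch of a get_meminfo-style helper reconstructed from the xtrace alone — the names (get, node, mem_f, mem, var, val) are taken from the trace, but the body is an approximation, not the actual test/setup/common.sh:

shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob

# Return the value of one key from /proc/meminfo (or from a per-node
# meminfo when a NUMA node number is given), e.g. HugePages_Surp -> 0.
get_meminfo() {
    local get=$1 node=$2
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # Prefer the per-node file when a node is given and the file exists.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # node*/meminfo lines are prefixed "Node <n> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the continue branch seen above
        echo "$val"
        return 0
    done
    return 1
}

# e.g.: get_meminfo HugePages_Surp    -> 0
#       get_meminfo HugePages_Total   -> 1024
#       get_meminfo HugePages_Surp 0  -> node0 value (node0/meminfo)

The backslash runs in the compares ([[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]) are only xtrace's escaping of the quoted right-hand side of ==; each test is a literal string comparison, not a glob match.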
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.192 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.193 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42724300 kB' 'MemAvailable: 47672296 kB' 'Buffers: 2704 kB' 'Cached: 11363292 kB' 'SwapCached: 0 kB' 'Active: 7261652 kB' 'Inactive: 4656152 kB' 'Active(anon): 6870604 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555188 kB' 'Mapped: 175896 kB' 'Shmem: 6318796 kB' 'KReclaimable: 548504 kB' 'Slab: 1187960 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639456 kB' 'KernelStack: 22240 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8319748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[... xtrace condensed: the common.sh@31-32 read/compare loop walks every key of that dump (MemTotal through HugePages_Free), taking the continue branch on each non-match against HugePages_Rsvd ...]
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:54.195 nr_hugepages=1024
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:54.195 resv_hugepages=0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:54.195 surplus_hugepages=0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:54.195 anon_hugepages=0
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
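With all three counters collected, the script does its bookkeeping before touching per-node state: the pool it configured has to equal what the kernel reports once surplus and reserved pages are added back (in the xtrace the left-hand 1024 appears already variable-expanded). A small hedged recreation of that arithmetic with this run's values — variable names follow the trace, the surrounding control flow is an assumption:

# Values echoed just above by this run.
nr_hugepages=1024   # configured 2048 kB pages (even_2G_alloc target)
surp=0              # HugePages_Surp from get_meminfo
resv=0              # HugePages_Rsvd from get_meminfo

# hugepages.sh@107/@109-style sanity checks: the whole pool must be
# accounted for, with nothing surplus or reserved, before the even split.
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
(( 1024 == nr_hugepages )) || echo "unexpected pool size" >&2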
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.195 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.196 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42724300 kB' 'MemAvailable: 47672296 kB' 'Buffers: 2704 kB' 'Cached: 11363292 kB' 'SwapCached: 0 kB' 'Active: 7261688 kB' 'Inactive: 4656152 kB' 'Active(anon): 6870640 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555220 kB' 'Mapped: 175896 kB' 'Shmem: 6318796 kB' 'KReclaimable: 548504 kB' 'Slab: 1187960 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639456 kB' 'KernelStack: 22256 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8319772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217224 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[... xtrace condensed: the common.sh@31-32 read/compare loop walks every key of that dump (MemTotal through Unaccepted), taking the continue branch on each non-match against HugePages_Total ...]
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26683100 kB' 'MemUsed: 5956040 kB' 'SwapCached: 0 kB' 'Active: 2992180 kB' 'Inactive: 622632 kB' 'Active(anon): 2688788 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188144 kB' 'Mapped: 131036 kB'
'AnonPages: 429860 kB' 'Shmem: 2262120 kB' 'KernelStack: 13112 kB' 'PageTables: 5848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665640 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
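The condensed loop above is the whole of the get_meminfo helper's work as the trace shows it: pick /proc/meminfo or a per-node meminfo file, strip the "Node <id> " prefix so both formats parse identically, then read "field: value" pairs until the requested field matches and echo its value. A minimal standalone sketch of that pattern follows; the function name and exact structure are illustrative, not the SPDK helper itself.

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above. Illustrative only.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=$2
        local mem_f=/proc/meminfo mem var val _
        # When a NUMA node id is given and the kernel exposes it, read
        # that node's meminfo instead of the system-wide one.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node lines carry a "Node <id> " prefix; strip it so both
        # sources parse the same way.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total      # e.g. prints 1024 here
    get_meminfo_sketch HugePages_Surp 0     # e.g. prints 0 for node0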
00:03:54.198 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... read / compare / continue cycle repeated over the node0 dump above, field by field, down to the HugePages_* entries ...]
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.459 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16041452 kB' 'MemUsed: 11614648 kB' 'SwapCached: 0 kB' 'Active: 4269864 kB' 'Inactive: 4033520 kB' 'Active(anon): 4182208 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8177916 kB' 'Mapped: 44860 kB' 'AnonPages: 125664 kB' 'Shmem: 4056740 kB' 'KernelStack: 9144 kB' 'PageTables: 2564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198424 kB' 'Slab: 522320 kB' 'SReclaimable: 198424 kB' 'SUnreclaim: 323896 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
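Node 0 and node 1 both report HugePages_Total: 512 here, which is exactly the even split the test is checking for. As a side note, the kernel also exposes the same per-node counts directly under sysfs (per the kernel's hugetlbpage documentation), so the split can be spot-checked without parsing meminfo; a one-off sketch, assuming the 2048 kB pool used on this rig:

    # Per-node 2 MiB hugepage counters, straight from sysfs (assumes the
    # hugepages-2048kB pool; other page sizes have their own directories).
    for n in /sys/devices/system/node/node[0-9]*; do
        echo "${n##*/}: $(cat "$n/hugepages/hugepages-2048kB/nr_hugepages")"
    done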
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [... read / compare / continue cycle repeated over the node1 dump above, field by field, down to the HugePages_* entries ...]
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.460 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:54.460 node0=512 expecting 512
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:54.461 node1=512 expecting 512
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:54.461
00:03:54.461 real 0m3.006s
00:03:54.461 user 0m1.006s
00:03:54.461 sys 0m1.899s
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:54.461 11:30:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:54.461 ************************************
00:03:54.461 END TEST even_2G_alloc
00:03:54.461 ************************************
00:03:54.461 11:30:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:54.461 11:30:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:54.461 11:30:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:54.461 11:30:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:54.461 11:30:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
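The even_2G_alloc result above reduces to simple arithmetic: 2048 MB worth of 2048 kB pages is 1024 hugepages, an even allocation puts 512 on each of the two nodes, and with zero surplus and reserved pages the observed total matches the request. A sketch of that bookkeeping, with variable names chosen here for illustration rather than taken from hugepages.sh:

    # Sketch of the even-allocation arithmetic behind the PASS above.
    hugemem_mb=2048      # HUGEMEM for the even 2G test
    page_kb=2048         # default x86_64 hugepage size
    no_nodes=2
    nr_hugepages=$(( hugemem_mb * 1024 / page_kb ))   # 1024
    per_node=$(( nr_hugepages / no_nodes ))           # 512
    surp=0 resv=0
    # The verifier's core check: observed total == requested + surplus + reserved.
    (( 1024 == nr_hugepages + surp + resv )) \
        && echo "node0=$per_node expecting $per_node"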
00:03:54.461 ************************************
00:03:54.461 START TEST odd_alloc
00:03:54.461 ************************************
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:54.461 11:30:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:57.751 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:57.751 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42712432 kB' 'MemAvailable: 47660428 kB' 'Buffers: 2704 kB' 'Cached: 11363424 kB' 'SwapCached: 0 kB' 'Active: 7263688 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872640 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556456 kB' 'Mapped: 176020 kB' 'Shmem: 6318928 kB' 'KReclaimable: 548504 kB' 'Slab: 1188092 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639588 kB' 'KernelStack: 22272 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8320404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217320 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
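The per-node assignment traced above gives node1 512 pages and node0 513 (512 + 513 = 1025), and the system-wide dump confirms HugePages_Total: 1025. One scheme consistent with those trace values is to floor-divide whatever remains over the nodes still unassigned, walking from the last node down; the sketch below is a reconstruction under that assumption, not necessarily hugepages.sh's exact code.

    # Sketch: distribute an odd hugepage count across NUMA nodes the way
    # the trace values suggest (node1=512, node0=513 for 1025 pages).
    split_hugepages_sketch() {
        local remaining=$1 nodes=$2
        local -a per_node
        while (( nodes > 0 )); do
            # Floor share of what is left for the highest unassigned node.
            per_node[nodes - 1]=$(( remaining / nodes ))
            (( remaining -= per_node[nodes - 1] ))
            (( nodes-- ))
        done
        echo "${per_node[@]}"
    }

    split_hugepages_sketch 1025 2   # prints: 513 512  (node0 node1)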
setup/common.sh@31 -- # IFS=': ' 00:03:57.751 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: the common.sh@31-32 loop walks the /proc/meminfo snapshot key by key, hitting "continue" for every field from Active(anon) through HardwareCorrupted while scanning for AnonHugePages]
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
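[editor's note] The pattern traced above is get_meminfo from setup/common.sh walking a /proc/meminfo snapshot one "key: value" pair at a time. A minimal bash sketch of that pattern, reconstructed from the trace alone and not copied from the script (the real function also handles the per-node /sys/devices/system/node/node$node/meminfo path, which this run skips because node= is empty):

    get_meminfo() {
        # hypothetical reduction of setup/common.sh's get_meminfo, not the script itself
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the common.sh@32 "continue" repeated above
            echo "$val"                        # common.sh@33: IFS=': ' already split the "kB" unit into $_
            return 0
        done < /proc/meminfo
        return 1                               # requested key not present
    }

    anon=$(get_meminfo AnonHugePages)          # -> 0 on this box, matching hugepages.sh@97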
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.752 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42712964 kB' 'MemAvailable: 47660960 kB' 'Buffers: 2704 kB' 'Cached: 11363436 kB' 'SwapCached: 0 kB' 'Active: 7262908 kB' 'Inactive: 4656152 kB' 'Active(anon): 6871860 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555732 kB' 'Mapped: 175984 kB' 'Shmem: 6318940 kB' 'KReclaimable: 548504 kB' 'Slab: 1188056 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639552 kB' 'KernelStack: 22240 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8320296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217288 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[trace condensed: the common.sh@31-32 loop skips every key from MemTotal through HugePages_Rsvd with "continue" while scanning for HugePages_Surp]
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[trace condensed: the common.sh@17-29 prologue repeats for HugePages_Rsvd -- get=HugePages_Rsvd, node= empty, mem_f=/proc/meminfo, mapfile -t mem, "Node +([0-9]) " prefix strip]
00:03:57.754 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42713884 kB' 'MemAvailable: 47661880 kB' 'Buffers: 2704 kB' 'Cached: 11363444 kB' 'SwapCached: 0 kB' 'Active: 7262296 kB' 'Inactive: 4656152 kB' 'Active(anon): 6871248 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555576 kB' 'Mapped: 175904 kB' 'Shmem: 6318948 kB' 'KReclaimable: 548504 kB' 'Slab: 1188028 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639524 kB' 'KernelStack: 22208 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8320076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217256 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
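[editor's note] The mem=("${mem[@]#Node +([0-9]) }") expansion at common.sh@29 deserves a word: per-node meminfo files prefix every key with "Node N ", and this strips that prefix so one parse loop handles both the per-node and system-wide files. A small demonstration under extglob; the array contents below are made up for illustration, and in this run node= was empty, so the strip was a no-op against /proc/meminfo:

    shopt -s extglob                                                      # +([0-9]) is an extglob pattern
    mem=('Node 0 MemTotal: 60295240 kB' 'Node 0 HugePages_Total: 1025')   # hypothetical per-node lines
    mem=("${mem[@]#Node +([0-9]) }")                                      # same expansion as common.sh@29
    printf '%s\n' "${mem[@]}"                                             # -> 'MemTotal: 60295240 kB' 'HugePages_Total: 1025'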
[trace condensed: the common.sh@31-32 loop skips every key from MemTotal through HugePages_Free with "continue" while scanning for HugePages_Rsvd]
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:57.756 nr_hugepages=1025
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:57.756 resv_hugepages=0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:57.756 surplus_hugepages=0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:57.756 anon_hugepages=0
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
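[editor's note] Putting the three scans together: hugepages.sh@97-110 collects the anonymous, surplus, and reserved counts and checks them against the 1025 pages the odd_alloc test requested (an odd count, presumably chosen so the allocation cannot split evenly across NUMA nodes). A hedged sketch of that bookkeeping, with names mirroring the trace rather than copied from the script:

    nr_hugepages=1025                          # the odd page count under test
    anon=$(get_meminfo AnonHugePages)          # hugepages.sh@97  -> 0 in this run
    surp=$(get_meminfo HugePages_Surp)         # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)         # hugepages.sh@100 -> 0
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$anon"
    (( 1025 == nr_hugepages + surp + resv ))   # hugepages.sh@107: pool arithmetic balances
    (( 1025 == nr_hugepages ))                 # hugepages.sh@109
    total=$(get_meminfo HugePages_Total)       # hugepages.sh@110: the scan that follows below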
[trace condensed: the common.sh@19-29 prologue repeats for HugePages_Total -- var/val/mem_f/mem declared, node= empty, mem_f=/proc/meminfo, mapfile -t mem, "Node +([0-9]) " prefix strip]
00:03:57.756 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42714840 kB' 'MemAvailable: 47662836 kB' 'Buffers: 2704 kB' 'Cached: 11363464 kB' 'SwapCached: 0 kB' 'Active: 7262292 kB' 'Inactive: 4656152 kB' 'Active(anon): 6871244 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555520 kB' 'Mapped: 175904 kB' 'Shmem: 6318968 kB' 'KReclaimable: 548504 kB' 'Slab: 1188028 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639524 kB' 'KernelStack: 22224 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37486624 kB' 'Committed_AS: 8320228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217256 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[trace condensed: the common.sh@31-32 loop begins scanning for HugePages_Total, skipping MemTotal through AnonPages with "continue"; this excerpt ends mid-comparison at Mapped]
00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.757 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26692808 kB' 'MemUsed: 5946332 kB' 'SwapCached: 0 kB' 'Active: 2992140 kB' 'Inactive: 622632 kB' 'Active(anon): 2688748 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188148 kB' 'Mapped: 131044 kB' 'AnonPages: 429760 kB' 'Shmem: 2262124 kB' 'KernelStack: 13112 kB' 'PageTables: 5848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665408 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
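For readers following the trace: every get_meminfo call above and below is the same small helper in setup/common.sh, traced line by line. Here is a minimal bash sketch reconstructed from the traced commands (the real script's structure and line numbers may differ slightly, and the per-key "continue" iterations are condensed into a loop):

    #!/usr/bin/env bash
    shopt -s extglob                       # needed for the +([0-9]) pattern
    # get_meminfo <field> [numa-node]: print the value of <field> from
    # /proc/meminfo, or from the node's own meminfo when a node is given.
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node meminfo prefixes every line with "Node <n> "; strip it
        mem=("${mem[@]#Node +([0-9]) }")
        local line IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"   # "HugePages_Total: 1025" -> var/val
            [[ $var == "$get" ]] && echo "$val" && return 0
        done
        return 1
    }
    get_meminfo HugePages_Total            # -> 1025, the value echoed at @33 above
    get_meminfo HugePages_Surp 0           # -> 0, the per-node query traced below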
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26692808 kB' 'MemUsed: 5946332 kB' 'SwapCached: 0 kB' 'Active: 2992140 kB' 'Inactive: 622632 kB' 'Active(anon): 2688748 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188148 kB' 'Mapped: 131044 kB' 'AnonPages: 429760 kB' 'Shmem: 2262124 kB' 'KernelStack: 13112 kB' 'PageTables: 5848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665408 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315328 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.758 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[log condensed: setup/common.sh@31-32 scanned the node0 fields above, continuing past each key until HugePages_Surp matched]
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:57.759 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 16021876 kB' 'MemUsed: 11634224 kB' 'SwapCached: 0 kB' 'Active: 4270404 kB' 'Inactive: 4033520 kB' 'Active(anon): 4182748 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8178076 kB' 'Mapped: 44860 kB' 'AnonPages: 125956 kB' 'Shmem: 4056900 kB' 'KernelStack: 9112 kB' 'PageTables: 2488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198424 kB' 'Slab: 522620 kB' 'SReclaimable: 198424 kB' 'SUnreclaim: 324196 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[log condensed: setup/common.sh@31-32 scanned the node1 fields above, continuing past each key until HugePages_Surp matched]
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:57.760 node0=512 expecting 513
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:57.760 node1=513 expecting 512
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:57.760 
00:03:57.760 real	0m3.203s
00:03:57.760 user	0m1.123s
00:03:57.760 sys	0m2.094s
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:57.760 11:30:25 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:57.760 ************************************
00:03:57.760 END TEST odd_alloc
00:03:57.760 ************************************
00:03:57.760 11:30:25 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
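A note on the "node0=512 expecting 513" / "node1=513 expecting 512" output above: odd_alloc deliberately requests an odd total (1025 pages, matching the (( 1025 == nr_hugepages + surp + resv )) check earlier), so the two NUMA nodes cannot receive equal shares and the kernel may place the extra page on either node. The test therefore passes as long as the sorted per-node totals match, which is what the @130 comparison does. A minimal sketch of the idiom at setup/hugepages.sh@126-130, with values filled in from the trace (the exact echo format suggests sysfs reported 512 on node0 while the test had pre-computed 513 there):

    #!/usr/bin/env bash
    nodes_sys=([0]=512 [1]=513)     # what sysfs reported per node
    nodes_test=([0]=513 [1]=512)    # what odd_alloc pre-computed (swapped here)
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1    # numeric indices of an indexed bash
        sorted_s[nodes_sys[node]]=1     # array always expand in ascending order
    done
    # "512 513" == "512 513" even though the per-node placement differs
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo pass

Using the counts themselves as array indices is what sorts them; the [[ 512 513 == \5\1\2\ \5\1\3 ]] line in the log is exactly this comparison after expansion.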
00:03:57.761 11:30:25 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:57.761 11:30:25 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:57.761 11:30:25 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:57.761 11:30:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:57.761 ************************************
00:03:57.761 START TEST custom_alloc
00:03:57.761 ************************************
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
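A note on the two get_test_nr_hugepages calls above: size=1048576 yielded nr_hugepages=512 and size=2097152 yielded 1024, which is consistent with the sizes being kilobytes (1 GiB and 2 GiB) divided by the 2048 kB huge page size reported in the meminfo dumps. A quick arithmetic check (default_hugepages=2048 is an inferred value, not shown in the trace):

    default_hugepages=2048                 # kB, matching 'Hugepagesize: 2048 kB'
    for size in 1048576 2097152; do        # 1 GiB and 2 GiB, in kB
        (( size >= default_hugepages )) && echo $(( size / default_hugepages ))
    done
    # prints 512 and 1024 -- the nr_hugepages values set at hugepages.sh@57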
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:57.761 11:30:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
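The HUGENODE string assembled above tells the scripts/setup.sh invocation (whose output follows) to reserve 512 huge pages on node0 and 1024 on node1, 1536 in total, matching the nr_hugepages=1536 seen after the script returns. The HUGENODE+=(...) and local IFS=, steps appear verbatim in the trace; a self-contained sketch of the join:

    nodes_hp=([0]=512 [1]=1024)
    HUGENODE=() _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))   # 1536 in total
    done
    IFS=,                                       # "${arr[*]}" joins elements on
    echo "HUGENODE='${HUGENODE[*]}'"            # the first character of IFS
    # -> HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'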
vfio-pci driver 00:04:01.057 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.057 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.057 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.057 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.057 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41674560 kB' 'MemAvailable: 46622556 kB' 'Buffers: 2704 kB' 'Cached: 11363604 kB' 'SwapCached: 0 kB' 'Active: 7263972 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872924 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556676 kB' 'Mapped: 176032 kB' 'Shmem: 6319108 kB' 'KReclaimable: 548504 kB' 'Slab: 1188120 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639616 kB' 'KernelStack: 22256 kB' 'PageTables: 8412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8321228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217288 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.057 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the same IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle repeats for every remaining /proc/meminfo field, MemFree through HardwareCorrupted]
00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.059 11:30:29
setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41674920 kB' 'MemAvailable: 46622916 kB' 'Buffers: 2704 kB' 'Cached: 11363608 kB' 'SwapCached: 0 kB' 'Active: 7263144 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872096 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556328 kB' 'Mapped: 175920 kB' 'Shmem: 6319112 kB' 'KReclaimable: 548504 kB' 'Slab: 1188144 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639640 kB' 'KernelStack: 22240 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8321248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217272 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.059 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the per-field IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle repeats for SwapCached through Unaccepted]
00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.061 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.326 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41675784 kB' 'MemAvailable: 46623780 kB' 'Buffers: 2704 kB' 'Cached: 11363624 kB' 'SwapCached: 0 kB' 'Active: 7263160 kB' 'Inactive: 4656152 kB' 'Active(anon): 6872112 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556328 kB' 'Mapped: 175920 kB' 'Shmem: 6319128 kB' 'KReclaimable: 548504 kB' 'Slab: 1188144 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639640 kB' 
'KernelStack: 22240 kB' 'PageTables: 8360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8321268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217288 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:04:01.326 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.326 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.326 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.326 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[xtrace condensed: the per-field scan repeats from MemFree onward looking for HugePages_Rsvd; the excerpt ends truncated mid-scan at ShmemHugePages]
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- 
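For readability: the wall of xtrace above, and the similar scans later in this stage, is SPDK's get_meminfo helper walking a meminfo file line by line until the requested field matches, then echoing its value. A minimal standalone sketch of the same technique, assuming bash 4+; the function name, node fallback, and error handling here are illustrative, not a verbatim copy of common.sh:

#!/usr/bin/env bash
shopt -s extglob   # required for the +([0-9]) pattern that strips "Node N " prefixes

# Hypothetical helper mirroring the traced logic: print the value of one
# meminfo field, optionally from a given NUMA node's sysfs meminfo file.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local -a mem
    local line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")            # per-node lines begin with "Node N "
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"  # e.g. var=HugePages_Rsvd val=0
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1                                    # field not found
}

get_meminfo_value HugePages_Rsvd     # prints 0 on this box
get_meminfo_value HugePages_Surp 0   # node0 value, used later in this stage

Reading the whole file with mapfile first (as the traced helper does) lets the "Node N " prefix be stripped with a single extglob expansion instead of per-line string surgery.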
setup/common.sh@31 -- # read -r var val _ 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:01.327 nr_hugepages=1536 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:01.327 resv_hugepages=0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:01.327 surplus_hugepages=0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:01.327 anon_hugepages=0 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.327 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.328 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 41679704 kB' 'MemAvailable: 46627700 kB' 'Buffers: 2704 kB' 'Cached: 11363644 kB' 'SwapCached: 0 kB' 'Active: 7262788 kB' 'Inactive: 4656152 kB' 'Active(anon): 6871740 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555920 kB' 'Mapped: 175920 kB' 'Shmem: 6319148 kB' 'KReclaimable: 548504 kB' 'Slab: 1188144 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639640 kB' 'KernelStack: 22224 kB' 'PageTables: 8308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36963360 kB' 'Committed_AS: 8322688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217320 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
3145728 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB' 00:04:01.328
[xtrace condensed: the IFS=': ' / read -r var val _ / [[ <field> == HugePages_Total ]] / continue cycle repeats for every /proc/meminfo field from MemTotal through ShmemPmdMapped; the trace picks up below with the last few skipped fields and the HugePages_Total match, which echoes 1536]
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:01.329 
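The arithmetic traced at hugepages.sh@107 and @110 above is what this whole scan feeds: the test requested 1536 pages and checks that the kernel's HugePages_Total equals the request once surplus and reserved pages are added in, after which get_nodes records how the pool is spread across NUMA nodes (512 pages on node0, 1024 on node1 in this run). A hedged re-derivation on top of the sketch above, with my own variable names:

want=1536                                  # pages requested by the custom_alloc test
nr=$(get_meminfo_value HugePages_Total)    # 1536 in this run
surp=$(get_meminfo_value HugePages_Surp)   # 0
resv=$(get_meminfo_value HugePages_Rsvd)   # 0
(( want == nr + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }

# Per-node split, gathered the way get_nodes globs the sysfs node directories:
for n in /sys/devices/system/node/node[0-9]*; do
    node=${n##*node}
    echo "node$node: $(get_meminfo_value HugePages_Total "$node") pages"
done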
11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 26686920 kB' 'MemUsed: 5952220 kB' 'SwapCached: 0 kB' 'Active: 2992948 kB' 'Inactive: 622632 kB' 'Active(anon): 2689556 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188168 kB' 'Mapped: 131060 kB' 'AnonPages: 430688 kB' 'Shmem: 2262144 kB' 'KernelStack: 13192 kB' 'PageTables: 5736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665448 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.329 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.329 11:30:29 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace condensed: the same scan now runs over /sys/devices/system/node/node0/meminfo, skipping SwapCached through AnonHugePages while looking for HugePages_Surp; it resumes below with the final skipped fields, the HugePages_Surp match, and echo 0]
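Steps @115 to @117, which this lookup and the node1 lookup below serve, fold the reserved count and each node's surplus into the expected per-node totals before the final per-node comparison. Roughly, with nodes_test seeded from the get_nodes values above (the exact bookkeeping in hugepages.sh may differ slightly):

nodes_test=(512 1024)   # expected pages per node, from get_nodes
resv=0                  # computed earlier in this stage
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo_value HugePages_Surp "$node") ))
done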
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.330 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27656100 kB' 'MemFree: 14991788 kB' 'MemUsed: 12664312 kB' 'SwapCached: 0 kB' 'Active: 4270044 kB' 'Inactive: 4033520 kB' 'Active(anon): 4182388 kB' 'Inactive(anon): 0 kB' 'Active(file): 87656 kB' 'Inactive(file): 4033520 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8178224 kB' 'Mapped: 44860 kB' 'AnonPages: 125416 kB' 'Shmem: 4057048 kB' 'KernelStack: 9096 kB' 'PageTables: 2368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 198424 kB' 'Slab: 522696 kB' 'SReclaimable: 198424 kB' 'SUnreclaim: 324272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.331 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:01.331
[xtrace condensed: the node1 HugePages_Surp lookup walks /sys/devices/system/node/node1/meminfo field by field exactly as the node0 lookup above, skipping SwapCached onward]
00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.332 
11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.332 node0=512 expecting 512 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.332 node1=1024 expecting 1024 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.332 00:04:01.332 real 0m3.594s 00:04:01.332 user 0m1.383s 00:04:01.332 sys 0m2.254s 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.332 11:30:29 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.332 ************************************ 00:04:01.332 END TEST custom_alloc 00:04:01.332 ************************************ 00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:01.332 11:30:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.332 ************************************ 00:04:01.332 START TEST no_shrink_alloc 00:04:01.332 ************************************ 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.332 11:30:29 
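For anyone replaying the node0=512 / node1=1024 check above by hand: the per-node counts live in sysfs, and a minimal standalone sketch (illustrative only, not the setup/hugepages.sh code; assumes 2048 kB hugepages, contiguous node numbering, and the standard sysfs layout) is:

  #!/usr/bin/env bash
  # Print the configured 2 MB hugepage count for each NUMA node and compare
  # it against an expected value passed per node, e.g.: ./check.sh 512 1024
  expected=("$@")
  i=0
  for node_dir in /sys/devices/system/node/node[0-9]*; do   # lexicographic order; fine for small node counts
      nr=$(< "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node$i=$nr expecting ${expected[i]:-?}"
      ((i++)) || true   # ((i++)) returns status 1 when i was 0; || true keeps set -e happy
  done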
00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:01.332 11:30:29 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.332 11:30:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.332 ************************************
00:04:01.332 START TEST no_shrink_alloc
00:04:01.332 ************************************
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
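The nr_hugepages=1024 traced above is plain division: the requested size over the hugepage size. The trace is consistent with size being given in kB (2097152 / 2048 = 1024, matching the Hugepagesize: 2048 kB reported in the meminfo dumps below). A hedged equivalent, with illustrative variable names:

  size_kb=2097152                                            # first argument to get_test_nr_hugepages
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
  echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> nr_hugepages=1024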
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.332 11:30:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.623 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.623 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.624 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.624 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.624 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.624 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42725028 kB' 'MemAvailable: 47673024 kB' 'Buffers: 2704 kB' 'Cached: 11363764 kB' 'SwapCached: 0 kB' 'Active: 7264940 kB' 'Inactive: 4656152 kB' 'Active(anon): 6873892 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557980 kB' 'Mapped: 175944 kB' 'Shmem: 6319268 kB' 'KReclaimable: 548504 kB' 'Slab: 1187776 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639272 kB' 'KernelStack: 22512 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8324928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217592 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:04:04.624 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read -r var val _ / continue over MemTotal through HardwareCorrupted; none matches]
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
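Every get_meminfo call in this trace follows the same pattern: pick /proc/meminfo (or the per-node file under /sys/devices/system/node when a node argument is given), strip the "Node <n> " prefix, split each line on ': ', and print the value whose key matches. A self-contained sketch of that pattern (a simplification for reference, not the verbatim setup/common.sh code):

  get_meminfo_value() {
      # Usage: get_meminfo_value <Key> [node], e.g. get_meminfo_value HugePages_Surp 0
      local key=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # Per-node files prefix every line with "Node <n> "; drop it, then split on ': '.
      sed -E 's/^Node [0-9]+ //' "$mem_f" | while IFS=': ' read -r var val _; do
          [[ $var == "$key" ]] && { echo "$val"; break; }
      done
  }
  get_meminfo_value AnonHugePages   # prints 0 on this host, matching anon=0 above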
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42727420 kB' 'MemAvailable: 47675416 kB' 'Buffers: 2704 kB' 'Cached: 11363772 kB' 'SwapCached: 0 kB' 'Active: 7265108 kB' 'Inactive: 4656152 kB' 'Active(anon): 6874060 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558064 kB' 'Mapped: 175944 kB' 'Shmem: 6319276 kB' 'KReclaimable: 548504 kB' 'Slab: 1187764 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639260 kB' 'KernelStack: 22288 kB' 'PageTables: 8736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217432 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:04:04.625 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: read -r var val _ / continue over MemTotal through HugePages_Rsvd, timestamps advancing to 00:04:04.890; none matches]
00:04:04.890 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.890 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.890 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.890 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
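Taken together, verify_nr_hugepages is collecting correction terms before comparing totals: AnonHugePages (transparent hugepages, in kB), HugePages_Surp, and next HugePages_Rsvd. The same numbers can be pulled with awk; a condensed, illustrative restatement (the echo at the end is ours, not part of the traced script):

  anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # 0 kB in the dumps above
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)   # 1024
  echo "total=$total surp=$surp resv=$resv anon=${anon}kB"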
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[xtrace elided: setup/common.sh@31-32 walks each field of the snapshot above in order, continuing past every non-matching key until it reaches HugePages_Rsvd]
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:04.892 nr_hugepages=1024
11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:04.892 resv_hugepages=0
11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:04.892 surplus_hugepages=0
11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:04.892 anon_hugepages=0
11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
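For readers following the trace: get_meminfo in setup/common.sh captures the meminfo snapshot above into an array, then scans it one 'key: value' line at a time, splitting with IFS=': ' and read, and continuing past every key until it finds the one requested. A minimal standalone sketch of that lookup technique, assuming a plain file read rather than SPDK's mapfile-based helper (the function and variable names here are illustrative, not the script's own):

    get_field() {                          # scan a meminfo-style file for one key
        local get=$1 mem_f=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # value lands in val; a trailing unit such as kB falls into _
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$mem_f"
        return 1                           # key not present
    }
    surp=$(get_field HugePages_Surp)       # 0 in the run above
    resv=$(get_field HugePages_Rsvd)       # 0 in the run above

With surp and resv both 0, the hugepages.sh@107 check reduces to (( 1024 == 1024 + 0 + 0 )), which is why this no_shrink_alloc pass succeeds without reclaiming any pages.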
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.892 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42727576 kB' 'MemAvailable: 47675572 kB' 'Buffers: 2704 kB' 'Cached: 11363788 kB' 'SwapCached: 0 kB' 'Active: 7264968 kB' 'Inactive: 4656152 kB' 'Active(anon): 6873920 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557844 kB' 'Mapped: 175944 kB' 'Shmem: 6319292 kB' 'KReclaimable: 548504 kB' 'Slab: 1187804 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 639300 kB' 'KernelStack: 22272 kB' 'PageTables: 8392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217448 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
[xtrace elided: the same per-key scan repeats over this snapshot until it reaches HugePages_Total]
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
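Next in the trace, get_nodes enumerates /sys/devices/system/node/node+([0-9]); each NUMA node directory carries its own meminfo, whose lines are prefixed 'Node N ' (the prefix that the mem=(...) expansion above strips off). A hedged sketch of the same per-node tally against those sysfs paths (the associative array name and the awk extraction are mine, not the script's):

    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        # last field of the 'Node N HugePages_Total: ...' line is the page count
        nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
    done
    echo "no_nodes=${#nodes_sys[@]}"       # 2 on this host: node0 holds 1024 pages, node1 holds 0

The trace below then re-runs get_meminfo with node=0, which switches mem_f to /sys/devices/system/node/node0/meminfo before performing the same per-key scan.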
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.895 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.896 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25624688 kB' 'MemUsed: 7014452 kB' 'SwapCached: 0 kB' 'Active: 2993104 kB' 'Inactive: 622632 kB' 'Active(anon): 2689712 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188176 kB' 'Mapped: 131072 kB' 'AnonPages: 430668 kB' 'Shmem: 2262152 kB' 'KernelStack: 13128 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 665260 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 315180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the per-key scan repeats over node0's meminfo fields shown above]
00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.897 11:30:32
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.897 node0=1024 expecting 1024 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.897 11:30:32 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.191 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:08.191 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 
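The hugepages.sh@117-130 records above show the tail of the per-node verification: a per-node counter is folded into nodes_test, the distinct observed counts are collected into sorted_t/sorted_s, and each node's total is echoed against the expected 1024 pages. A minimal self-contained sketch of that accounting pattern follows; the array names come from the trace, while the loop framing and the exit-on-mismatch are assumptions for illustration:

    #!/usr/bin/env bash
    # Hypothetical reconstruction of the per-node hugepage check traced above.
    declare -A nodes_test=([0]=1024)   # pages counted for the test (per node)
    declare -A nodes_sys=([0]=1024)    # pages reported by the system (per node)
    declare -A sorted_t sorted_s       # sets of distinct observed counts
    expected=1024

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[$node]}]=1
        sorted_s[${nodes_sys[$node]}]=1
        echo "node${node}=${nodes_test[$node]} expecting ${expected}"
        # Mirrors the [[ 1024 == 1024 ]] check at hugepages.sh@130
        [[ ${nodes_test[$node]} == "$expected" ]] || exit 1
    done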
00:04:08.191 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.191 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # [... xtrace elided: get_meminfo prologue (common.sh@17-31): get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, set IFS=': ' and read fields ...]
00:04:08.192 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42731952 kB' 'MemAvailable: 47679948 kB' 'Buffers: 2704 kB' 'Cached: 11363892 kB' 'SwapCached: 0 kB' 'Active: 7265368 kB' 'Inactive: 4656152 kB' 'Active(anon): 6874320 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558228 kB' 'Mapped: 175960 kB' 'Shmem: 6319396 kB' 'KReclaimable: 548504 kB' 'Slab: 1187144 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 638640 kB' 'KernelStack: 22400 kB' 'PageTables: 8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217448 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
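The snapshot above is the raw input for the checks that follow, and its hugepage fields are internally consistent: 1024 pages of 2048 kB each gives exactly the reported 'Hugetlb: 2097152 kB' (2 GiB). A one-line check of that arithmetic against a live system, assuming only the standard /proc/meminfo field names:

    # 1024 pages * 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'
    awk '/^HugePages_Total/ {n=$2} /^Hugepagesize/ {sz=$2}
         END {printf "total=%d kB\n", n*sz}' /proc/meminfo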
00:04:08.192 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... xtrace elided: /proc/meminfo keys MemTotal through HardwareCorrupted read and skipped; none matched AnonHugePages ...]
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # [... xtrace elided: get_meminfo prologue (common.sh@17-31): get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, set IFS=': ' and read fields ...]
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42732088 kB' 'MemAvailable: 47680084 kB' 'Buffers: 2704 kB' 'Cached: 11363896 kB' 'SwapCached: 0 kB' 'Active: 7266380 kB' 'Inactive: 4656152 kB' 'Active(anon): 6875332 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559148 kB' 'Mapped: 175952 kB' 'Shmem: 6319400 kB' 'KReclaimable: 548504 kB' 'Slab: 1187144 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 638640 kB' 'KernelStack: 22320 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8342416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217480 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
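The common.sh@16-33 records repeated throughout this section trace get_meminfo itself: load the meminfo file, strip any "Node N " prefix, then scan key by key with IFS=': ' until the requested field matches and its value is echoed. A self-contained reconstruction from those records follows; the body mirrors the traced line numbers, but the per-node fallback details and the default-to-0 behavior are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix strip seen at common.sh@29

    get_meminfo() {
        local get=$1 node=${2:-}        # common.sh@17-18
        local var val _                 # common.sh@19
        local mem_f=/proc/meminfo mem   # common.sh@20-22
        # Per-node statistics come from sysfs when a node is given (common.sh@23)
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                # common.sh@28
        mem=("${mem[@]#Node +([0-9]) }")         # common.sh@29
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # common.sh@31
            [[ $var == "$get" ]] || continue         # common.sh@32
            echo "${val:-0}"                         # common.sh@33
            return 0
        done
        echo 0
    }

    get_meminfo HugePages_Surp   # prints 0 on this node, as in the trace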
00:04:08.193 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... xtrace elided: /proc/meminfo keys MemTotal through HugePages_Free read and skipped; none matched HugePages_Surp ...]
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # [... xtrace elided: get_meminfo prologue (common.sh@17-31): get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, set IFS=': ' and read fields ...]
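Taken together, hugepages.sh@97, @99, and @100 in the trace are three get_meminfo lookups whose results feed verify_nr_hugepages. The call pattern, using the variable names from the trace (the inline comments are interpretation, not from the log):

    anon=$(get_meminfo AnonHugePages)   # transparent hugepages currently in use
    surp=$(get_meminfo HugePages_Surp)  # surplus pages beyond nr_hugepages
    resv=$(get_meminfo HugePages_Rsvd)  # pages reserved but not yet faulted in
    echo "anon=$anon surp=$surp resv=$resv"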
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42732780 kB' 'MemAvailable: 47680776 kB' 'Buffers: 2704 kB' 'Cached: 11363912 kB' 'SwapCached: 0 kB' 'Active: 7266448 kB' 'Inactive: 4656152 kB' 'Active(anon): 6875400 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559236 kB' 'Mapped: 175952 kB' 'Shmem: 6319416 kB' 'KReclaimable: 548504 kB' 'Slab: 1187284 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 638780 kB' 'KernelStack: 22624 kB' 'PageTables: 8996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217512 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:04:08.195 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [... xtrace elided: reading /proc/meminfo keys for HugePages_Rsvd (MemTotal through SecPageTables); the log excerpt ends mid-scan ...]
# continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.196 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.197 nr_hugepages=1024 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.197 resv_hugepages=0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.197 surplus_hugepages=0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.197 anon_hugepages=0 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 
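The condensed @31/@32 xtrace above is one small parser at work: setup/common.sh snapshots meminfo into an array, strips any per-node prefix, then walks it field by field with IFS=': ', skipping every key that is not the requested one and echoing the value on a match. A minimal standalone sketch of that lookup pattern, assuming only bash with extglob enabled (the helper name is illustrative, not SPDK's exact source):

#!/usr/bin/env bash
shopt -s extglob   # the +([0-9]) pattern below needs extended globs
# Sketch: fetch one field from /proc/meminfo, or from a NUMA node's copy,
# mirroring the mapfile/strip/scan sequence in the trace above.
get_meminfo_sketch() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo
    local line var val _ mem
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node meminfo prefixes every line with "Node <N> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo_sketch HugePages_Rsvd     # 0 in the snapshot above
get_meminfo_sketch HugePages_Total 0  # per-node form: 1024 on node0 here

Run against the snapshot just printed, the first call returns the 0 the test records as resv=0.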
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60295240 kB' 'MemFree: 42732460 kB' 'MemAvailable: 47680456 kB' 'Buffers: 2704 kB' 'Cached: 11363936 kB' 'SwapCached: 0 kB' 'Active: 7266180 kB' 'Inactive: 4656152 kB' 'Active(anon): 6875132 kB' 'Inactive(anon): 0 kB' 'Active(file): 391048 kB' 'Inactive(file): 4656152 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558896 kB' 'Mapped: 175952 kB' 'Shmem: 6319440 kB' 'KReclaimable: 548504 kB' 'Slab: 1187180 kB' 'SReclaimable: 548504 kB' 'SUnreclaim: 638676 kB' 'KernelStack: 22624 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37487648 kB' 'Committed_AS: 8325572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217464 kB' 'VmallocChunk: 0 kB' 'Percpu: 131264 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3149172 kB' 'DirectMap2M: 23799808 kB' 'DirectMap1G: 41943040 kB'
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.197 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31/@32 xtrace elided: the IFS/read/compare/continue cycle repeats unchanged for every remaining /proc/meminfo field until HugePages_Total is reached]
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
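get_nodes, traced just above, enumerates /sys/devices/system/node/node+([0-9]) and records where the reserved pages landed: nodes_sys[0]=1024 and nodes_sys[1]=0 on this two-node box. The HugePages_Surp probe that follows then reads node0's own meminfo instead of the global file. A short sketch of such a per-node tally, assuming the standard sysfs hugepage layout (the 2048kB nr_hugepages knob is one place the count can be read from; the trace itself derives it via per-node meminfo):

#!/usr/bin/env bash
# Sketch: tally reserved 2M hugepages per NUMA node from sysfs,
# producing the same numbers get_nodes records above.
declare -a nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    # ${node##*node} keeps only the trailing node number, as in the trace.
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]}"   # node0=1024 and node1=0 on this box
done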
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32639140 kB' 'MemFree: 25618584 kB' 'MemUsed: 7020556 kB' 'SwapCached: 0 kB' 'Active: 2994700 kB' 'Inactive: 622632 kB' 'Active(anon): 2691308 kB' 'Inactive(anon): 0 kB' 'Active(file): 303392 kB' 'Inactive(file): 622632 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3188220 kB' 'Mapped: 131080 kB' 'AnonPages: 432288 kB' 'Shmem: 2262196 kB' 'KernelStack: 13560 kB' 'PageTables: 6960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350080 kB' 'Slab: 664660 kB' 'SReclaimable: 350080 kB' 'SUnreclaim: 314580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.199 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31/@32 xtrace elided: the IFS/read/compare/continue cycle repeats unchanged for every remaining node0 meminfo field until HugePages_Surp is reached]
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:08.200 node0=1024 expecting 1024 11:30:35 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:08.200
00:04:08.200 real 0m6.552s
00:04:08.200 user 0m2.320s
00:04:08.200 sys 0m4.298s
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:08.200 11:30:35 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:08.200 ************************************
00:04:08.200 END TEST no_shrink_alloc
00:04:08.200 ************************************
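With no_shrink_alloc finished, the teardown traced below (clear_hp) releases the pool by writing 0 into every per-node, per-size nr_hugepages knob and exports CLEAR_HUGE=yes for the scripts that run afterwards. A sketch of that cleanup, assuming root privileges (sudo tee stands in here for the script's plain redirect):

#!/usr/bin/env bash
# Sketch: free all reserved hugepages, node by node and size by size,
# the same sysfs walk the clear_hp trace below performs.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
    done
done
export CLEAR_HUGE=yes   # mirrors the export in the trace below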
00:04:08.200 ************************************
00:04:08.200 11:30:35 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:08.200 11:30:35 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:08.200
00:04:08.200 real 0m25.743s
00:04:08.200 user 0m8.813s
00:04:08.200 sys 0m15.583s
00:04:08.200 11:30:35 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:08.200 11:30:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:08.200 ************************************
00:04:08.200 END TEST hugepages
00:04:08.200 ************************************
00:04:08.200 11:30:36 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:08.201 11:30:36 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:08.201 11:30:36 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:08.201 11:30:36 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:08.201 11:30:36 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:08.201 ************************************
00:04:08.201 START TEST driver
00:04:08.201 ************************************
00:04:08.201 11:30:36 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:08.201 * Looking for test storage...
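Note on the trace above: the no_shrink_alloc tail is setup/common.sh walking a meminfo dump field by field (IFS=': ' read -r var val _) until it reaches HugePages_Surp, and clear_hp then zeroes every per-node hugepage pool. A minimal standalone sketch of both, assuming the /proc/meminfo field names seen in the trace; the function names get_hugepages_surp and clear_hp_sketch are illustrative, not SPDK's:

    # Scan meminfo the way common.sh@31-33 does: split on ':' and space,
    # skip every field until the one we want, print its value (default 0).
    get_hugepages_surp() {
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == HugePages_Surp ]] || continue
            echo "${val:-0}"
            return 0
        done < /proc/meminfo
    }

    # Zero every per-node hugepage pool, as hugepages.sh@39-45 does (needs root).
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes
    }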
00:04:08.201 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.201 11:30:36 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:08.201 11:30:36 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.201 11:30:36 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:13.467 11:30:40 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:13.467 11:30:40 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.467 11:30:40 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.467 11:30:40 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:13.467 ************************************ 00:04:13.467 START TEST guess_driver 00:04:13.467 ************************************ 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:13.467 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:13.467 Looking for driver=vfio-pci 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.467 11:30:40 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 
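Note on the guess_driver pick above: the decision boils down to three checks visible in the trace: whether unsafe no-IOMMU mode is available, whether the host exposes IOMMU groups (176 on this box), and whether modprobe can resolve vfio_pci down to real .ko files. A hedged sketch of that logic; the function name and the fallback message wording follow the log, everything else is illustrative:

    pick_driver_sketch() {
        shopt -s nullglob               # so an empty iommu_groups dir really counts as zero
        local unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] \
            && unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if [[ $unsafe_vfio == Y ]] || (( ${#iommu_groups[@]} > 0 )); then
            # is_driver: the module is usable if modprobe resolves it to .ko files
            if [[ $(modprobe --show-depends vfio_pci 2> /dev/null) == *.ko* ]]; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }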
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.994 11:30:43 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:17.900 11:30:45 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:22.154 00:04:22.154 real 0m9.519s 00:04:22.154 user 0m2.391s 00:04:22.154 sys 0m4.824s 00:04:22.154 11:30:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.154 11:30:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:22.154 ************************************ 00:04:22.154 END TEST guess_driver 00:04:22.154 ************************************ 00:04:22.154 11:30:50 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:04:22.154 00:04:22.154 real 0m14.120s 00:04:22.154 user 0m3.572s 00:04:22.154 sys 0m7.421s 00:04:22.154 11:30:50 
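Note on the long run of driver.sh@57-61 checks above: it is a read loop over the `setup output config` listing. Every line carrying the "->" marker names the driver a device ended up bound to, and any mismatch with the picked driver flips fail to 1. In sketch form (verify_bound_driver is an illustrative name; the setup.sh path is the one from this workspace):

    verify_bound_driver() {
        local driver=$1 fail=0 _ marker setup_driver
        while read -r _ _ _ _ marker setup_driver; do
            [[ $marker == '->' ]] || continue          # only lines that report a binding
            [[ $setup_driver == "$driver" ]] || fail=1
        done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
        (( fail == 0 ))                                # driver.sh@64
    }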
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:22.154 11:30:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:22.154 ************************************
00:04:22.154 END TEST driver
00:04:22.154 ************************************
00:04:22.154 11:30:50 setup.sh -- common/autotest_common.sh@1142 -- # return 0
00:04:22.154 11:30:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:22.154 11:30:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:22.154 11:30:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:22.154 11:30:50 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:22.436 ************************************
00:04:22.436 START TEST devices
00:04:22.436 ************************************
00:04:22.436 11:30:50 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:22.436 * Looking for test storage...
00:04:22.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:22.436 11:30:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:22.436 11:30:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:22.436 11:30:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:22.436 11:30:50 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:d8:00.0
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]]
00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:25.721 11:30:53 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:25.721
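Note on get_zoned_devs above: it filters out zoned namespaces before the device tests start; this box's nvme0n1 reports "none" and stays in play. The equivalent standalone loop, keyed by device name for illustration (the traced version declares the array with local -gA):

    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1     # zoned device: exclude it from the tests
        fi
    done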
11:30:53 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:25.721 No valid GPT data, bailing 00:04:25.721 11:30:53 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:25.721 11:30:53 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:25.721 11:30:53 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:25.721 11:30:53 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:25.721 11:30:53 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:25.721 11:30:53 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:25.721 11:30:53 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.721 11:30:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:25.721 ************************************ 00:04:25.721 START TEST nvme_mount 00:04:25.721 ************************************ 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:25.721 11:30:53 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.657 Creating new GPT entries in memory. 00:04:26.657 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.657 other utilities. 00:04:26.657 11:30:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.657 11:30:54 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.657 11:30:54 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:26.657 11:30:54 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.657 11:30:54 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:28.034 Creating new GPT entries in memory. 00:04:28.034 The operation has completed successfully. 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1760480 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:28.034 11:30:55 
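Note on the partition_drive sequence traced above (run after nvme0n1 cleared the 3221225472-byte min_disk_size gate with its 1600321314816-byte capacity): zap the label, then lay out equal partitions in 512-byte sectors starting at LBA 2048, serializing each sgdisk call behind flock while sync_dev_uevents.sh waits for the partition uevents. A minimal sketch under those assumptions; the helper name is my own:

    partition_drive_sketch() {
        local disk=$1 part_no=${2:-1} size=${3:-1073741824}   # 1 GiB per partition, as traced
        local part part_start=0 part_end=0
        (( size /= 512 ))                  # bytes -> 512-byte LBAs (2097152 for 1 GiB)
        sgdisk "/dev/$disk" --zap-all      # destroy GPT and MBR data structures
        for (( part = 1; part <= part_no; part++ )); do
            (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
            (( part_end = part_start + size - 1 ))
            # flock serializes sgdisk against concurrent access to the disk node
            flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
        done
    }

For one partition this reproduces the --new=1:2048:2099199 call in the log: 2048 + 2097152 - 1 = 2099199.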
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.034 11:30:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 
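Note on the wall of devices.sh@62 comparisons above: verify() scans the setup.sh config listing one PCI function at a time until it reaches 0000:d8:00.0, then checks that the "Active devices:" field names the expected mount (mount@nvme0n1:nvme0n1p1 here), which is what keeps setup.sh from rebinding the controller. Sketched with an illustrative name; PCI_ALLOWED narrows setup.sh to the device under test, as in the trace:

    verify_active() {
        local dev=$1 mounts=$2 pci status found=0
        while read -r pci _ _ status; do
            [[ $pci == "$dev" ]] || continue
            [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
        done < <(PCI_ALLOWED="$dev" \
            /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
        (( found == 1 ))
    }
    # e.g.: verify_active 0000:d8:00.0 nvme0n1:nvme0n1p1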
11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:31.322 11:30:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:31.322 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.322 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:31.322 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:31.322 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.322 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.322 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.580 11:30:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 
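Note on cleanup_nvme, traced a few lines back: it unmounts the test mount point and wipes signatures innermost-out, first the ext4 magic on the partition (the 53 ef bytes at 0x438 in the log), then the whole disk, which clears the primary GPT header, the backup header near the end of the disk, and the protective MBR (the 45 46 49 20 50 41 52 54 and 55 aa byte runs). As a standalone sketch with an illustrative name:

    cleanup_nvme_sketch() {
        local mnt=$1 disk=$2
        mountpoint -q "$mnt" && umount "$mnt"
        [[ -b /dev/${disk}p1 ]] && wipefs --all "/dev/${disk}p1"   # filesystem signature
        [[ -b /dev/$disk ]] && wipefs --all "/dev/$disk"           # both GPT copies + PMBR
    }
    # e.g.: cleanup_nvme_sketch /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount nvme0n1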
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.109 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.110 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.368 11:31:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.368 11:31:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.645 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.645 00:04:37.645 real 0m11.979s 00:04:37.645 user 0m3.381s 00:04:37.645 sys 0m6.481s 00:04:37.645 11:31:05 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.645 11:31:05 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.645 ************************************ 00:04:37.645 END TEST nvme_mount 00:04:37.645 ************************************ 00:04:37.903 11:31:05 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:37.903 11:31:05 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.903 11:31:05 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.903 11:31:05 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.903 11:31:05 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.903 ************************************ 00:04:37.903 START TEST dm_mount 00:04:37.903 ************************************ 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.903 11:31:05 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.837 Creating new GPT entries in memory. 00:04:38.837 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.837 other utilities. 00:04:38.837 11:31:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.837 11:31:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.837 11:31:06 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:38.837 11:31:06 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.837 11:31:06 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.772 Creating new GPT entries in memory. 00:04:39.772 The operation has completed successfully. 00:04:39.772 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.772 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.772 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.772 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.772 11:31:07 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:41.150 The operation has completed successfully. 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1764707 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:41.150 11:31:08 setup.sh.devices.dm_mount -- 
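Note on the dm_mount setup above: the disk is carved into two 1 GiB partitions (--new=1:2048:2099199 and --new=2:2099200:4196351) and `dmsetup create nvme_dm_test` joins them. The table itself is fed on stdin and does not appear in the trace, so the linear concatenation below is an assumption about its shape, not a quote from the log; the readlink and holders steps afterwards are exactly what devices.sh@165-169 traces:

    # Assumed table: concatenate the two 2097152-sector partitions linearly.
    printf '%s\n' \
        '0 2097152 linear /dev/nvme0n1p1 0' \
        '2097152 2097152 linear /dev/nvme0n1p2 0' \
        | dmsetup create nvme_dm_test
    dm=$(readlink -f /dev/mapper/nvme_dm_test)   # /dev/dm-0 in this run
    dm=${dm##*/}                                 # -> dm-0
    # both backing partitions now expose the dm node as a holder:
    [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
    [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]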
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:41.150 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:41.151 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:41.151 11:31:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:41.151 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.151 11:31:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:44.437 11:31:12 
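Note on the verify call just above: once the test file is removed and dm_mount is unmounted, the expected markers switch from mount@ to holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 — setup.sh treats a block device as busy not only when it is mounted but when anything in sysfs still holds it. A sketch of that test with an illustrative name:

    is_active_sketch() {
        local blk=$1 holder
        grep -q "^/dev/$blk " /proc/mounts && return 0          # still mounted somewhere
        for holder in "/sys/class/block/$blk/holders/"*; do
            [[ -e $holder ]] && return 0                        # e.g. dm-0 holds nvme0n1p1
        done
        return 1
    }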
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.437 11:31:12 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 
00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:47.723 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:48.002 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:48.002 00:04:48.002 real 0m10.137s 00:04:48.002 user 0m2.588s 00:04:48.002 sys 0m4.662s 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.002 11:31:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:48.002 ************************************ 00:04:48.002 END TEST dm_mount 00:04:48.002 ************************************ 00:04:48.002 11:31:15 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:48.002 
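The dm_mount teardown above mirrors its setup in reverse. A minimal sketch of that cleanup sequence, assuming the same names the log shows (a nvme_dm_test device-mapper target over /dev/nvme0n1p1 and /dev/nvme0n1p2; MOUNT_DIR is a stand-in for the abbreviated dm_mount test directory, not a harness variable):

    # unmount the dm test mount point, then remove the device-mapper target
    mountpoint -q "$MOUNT_DIR" && umount "$MOUNT_DIR"
    [ -L /dev/mapper/nvme_dm_test ] && dmsetup remove --force nvme_dm_test
    # wipe filesystem signatures from the backing partitions, as the log does
    for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
        [ -b "$part" ] && wipefs --all "$part"
    done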
11:31:15 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:48.002 11:31:15 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:48.002 11:31:15 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:48.002 11:31:15 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.002 11:31:15 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:48.002 11:31:16 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.002 11:31:16 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:48.269 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:48.269 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:48.269 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:48.269 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:48.269 11:31:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:48.269 00:04:48.269 real 0m26.041s 00:04:48.269 user 0m7.205s 00:04:48.269 sys 0m13.658s 00:04:48.269 11:31:16 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.269 11:31:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:48.269 ************************************ 00:04:48.269 END TEST devices 00:04:48.269 ************************************ 00:04:48.269 11:31:16 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:48.269 00:04:48.269 real 1m30.305s 00:04:48.269 user 0m27.361s 00:04:48.269 sys 0m51.548s 00:04:48.269 11:31:16 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.269 11:31:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:48.269 ************************************ 00:04:48.269 END TEST setup.sh 00:04:48.269 ************************************ 00:04:48.528 11:31:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:48.528 11:31:16 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:51.810 Hugepages 00:04:51.810 node hugesize free / total 00:04:51.810 node0 1048576kB 0 / 0 00:04:51.810 node0 2048kB 2048 / 2048 00:04:51.810 node1 1048576kB 0 / 0 00:04:51.810 node1 2048kB 0 / 0 00:04:51.810 00:04:51.810 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:51.810 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.0 8086 2021 1 
ioatdma - - 00:04:51.810 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:51.810 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:52.068 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:52.068 11:31:19 -- spdk/autotest.sh@130 -- # uname -s 00:04:52.068 11:31:19 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:52.068 11:31:19 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:52.068 11:31:19 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.348 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:55.348 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.251 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.251 11:31:24 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.188 11:31:25 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.188 11:31:25 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.188 11:31:25 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.188 11:31:25 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.188 11:31:25 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.188 11:31:25 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.188 11:31:25 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.188 11:31:25 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.188 11:31:25 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.188 11:31:26 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.188 11:31:26 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:04:58.188 11:31:26 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.723 Waiting for block devices as requested 00:05:00.981 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:00.981 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:00.981 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:01.240 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:01.240 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:01.240 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:01.240 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:01.498 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:01.498 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 
00:05:01.498 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:01.757 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:01.757 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:01.757 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:02.015 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:02.015 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:02.015 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:02.275 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:02.275 11:31:30 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:02.275 11:31:30 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1502 -- # grep 0000:d8:00.0/nvme/nvme 00:05:02.275 11:31:30 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:02.275 11:31:30 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:02.275 11:31:30 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:02.275 11:31:30 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:02.275 11:31:30 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:02.275 11:31:30 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:02.275 11:31:30 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:02.275 11:31:30 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:02.275 11:31:30 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:02.275 11:31:30 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:02.275 11:31:30 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:02.275 11:31:30 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:02.275 11:31:30 -- common/autotest_common.sh@1557 -- # continue 00:05:02.275 11:31:30 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:02.275 11:31:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.275 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.275 11:31:30 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:02.275 11:31:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:02.275 11:31:30 -- common/autotest_common.sh@10 -- # set +x 00:05:02.275 11:31:30 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:05.561 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:05.561 0000:80:04.5 
(8086 2021): ioatdma -> vfio-pci 00:05:05.819 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:05.819 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:05.819 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:05.819 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:05.819 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.198 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.457 11:31:35 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:07.457 11:31:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.457 11:31:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.457 11:31:35 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:07.457 11:31:35 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:07.457 11:31:35 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:07.457 11:31:35 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:07.457 11:31:35 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:07.457 11:31:35 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:07.457 11:31:35 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:07.457 11:31:35 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:07.457 11:31:35 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.457 11:31:35 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.457 11:31:35 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:07.457 11:31:35 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:07.457 11:31:35 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:05:07.716 11:31:35 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:07.716 11:31:35 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:07.716 11:31:35 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:07.716 11:31:35 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:07.716 11:31:35 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:07.716 11:31:35 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:d8:00.0 00:05:07.716 11:31:35 -- common/autotest_common.sh@1592 -- # [[ -z 0000:d8:00.0 ]] 00:05:07.716 11:31:35 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1774437 00:05:07.716 11:31:35 -- common/autotest_common.sh@1598 -- # waitforlisten 1774437 00:05:07.716 11:31:35 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:07.716 11:31:35 -- common/autotest_common.sh@829 -- # '[' -z 1774437 ']' 00:05:07.716 11:31:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.716 11:31:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.716 11:31:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.716 11:31:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.716 11:31:35 -- common/autotest_common.sh@10 -- # set +x 00:05:07.716 [2024-07-15 11:31:35.629871] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:05:07.716 [2024-07-15 11:31:35.629926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1774437 ] 00:05:07.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.716 [2024-07-15 11:31:35.700083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.716 [2024-07-15 11:31:35.773522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.651 11:31:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.651 11:31:36 -- common/autotest_common.sh@862 -- # return 0 00:05:08.652 11:31:36 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:08.652 11:31:36 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:08.652 11:31:36 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:11.936 nvme0n1 00:05:11.936 11:31:39 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:11.936 [2024-07-15 11:31:39.586405] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:11.936 request: 00:05:11.936 { 00:05:11.936 "nvme_ctrlr_name": "nvme0", 00:05:11.936 "password": "test", 00:05:11.936 "method": "bdev_nvme_opal_revert", 00:05:11.936 "req_id": 1 00:05:11.936 } 00:05:11.936 Got JSON-RPC error response 00:05:11.936 response: 00:05:11.936 { 00:05:11.936 "code": -32602, 00:05:11.936 "message": "Invalid parameters" 00:05:11.936 } 00:05:11.936 11:31:39 -- common/autotest_common.sh@1604 -- # true 00:05:11.936 11:31:39 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:11.936 11:31:39 -- common/autotest_common.sh@1608 -- # killprocess 1774437 00:05:11.936 11:31:39 -- common/autotest_common.sh@948 -- # '[' -z 1774437 ']' 00:05:11.936 11:31:39 -- common/autotest_common.sh@952 -- # kill -0 1774437 00:05:11.936 11:31:39 -- common/autotest_common.sh@953 -- # uname 00:05:11.936 11:31:39 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:11.936 11:31:39 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1774437 00:05:11.936 11:31:39 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:11.936 11:31:39 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:11.936 11:31:39 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1774437' 00:05:11.936 killing process with pid 1774437 00:05:11.936 11:31:39 -- common/autotest_common.sh@967 -- # kill 1774437 00:05:11.936 11:31:39 -- common/autotest_common.sh@972 -- # wait 1774437 00:05:13.839 11:31:41 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:13.839 11:31:41 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:13.839 11:31:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.839 11:31:41 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:13.839 11:31:41 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:13.839 11:31:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:13.839 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.839 11:31:41 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:13.839 11:31:41 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:13.839 11:31:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:05:13.839 11:31:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.839 11:31:41 -- common/autotest_common.sh@10 -- # set +x 00:05:13.839 ************************************ 00:05:13.839 START TEST env 00:05:13.839 ************************************ 00:05:13.839 11:31:41 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:14.098 * Looking for test storage... 00:05:14.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:14.098 11:31:41 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:14.098 11:31:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.098 11:31:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.098 11:31:41 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.098 ************************************ 00:05:14.098 START TEST env_memory 00:05:14.098 ************************************ 00:05:14.098 11:31:42 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:14.098 00:05:14.098 00:05:14.098 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.098 http://cunit.sourceforge.net/ 00:05:14.098 00:05:14.098 00:05:14.098 Suite: memory 00:05:14.098 Test: alloc and free memory map ...[2024-07-15 11:31:42.073521] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:14.098 passed 00:05:14.098 Test: mem map translation ...[2024-07-15 11:31:42.092188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:14.098 [2024-07-15 11:31:42.092205] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:14.098 [2024-07-15 11:31:42.092242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:14.098 [2024-07-15 11:31:42.092250] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:14.098 passed 00:05:14.098 Test: mem map registration ...[2024-07-15 11:31:42.128672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:14.098 [2024-07-15 11:31:42.128690] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:14.098 passed 00:05:14.098 Test: mem map adjacent registrations ...passed 00:05:14.098 00:05:14.098 Run Summary: Type Total Ran Passed Failed Inactive 00:05:14.098 suites 1 1 n/a 0 0 00:05:14.098 tests 4 4 4 0 0 00:05:14.098 asserts 152 152 152 0 n/a 00:05:14.098 00:05:14.098 Elapsed time = 0.132 seconds 00:05:14.098 00:05:14.098 real 0m0.146s 00:05:14.098 user 0m0.136s 00:05:14.098 sys 0m0.010s 00:05:14.098 11:31:42 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.098 11:31:42 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:14.098 ************************************ 00:05:14.098 END TEST env_memory 00:05:14.098 ************************************ 00:05:14.358 11:31:42 env -- common/autotest_common.sh@1142 -- # return 0 00:05:14.358 11:31:42 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:14.358 11:31:42 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.358 11:31:42 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.358 11:31:42 env -- common/autotest_common.sh@10 -- # set +x 00:05:14.358 ************************************ 00:05:14.358 START TEST env_vtophys 00:05:14.358 ************************************ 00:05:14.358 11:31:42 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:14.358 EAL: lib.eal log level changed from notice to debug 00:05:14.358 EAL: Detected lcore 0 as core 0 on socket 0 00:05:14.358 EAL: Detected lcore 1 as core 1 on socket 0 00:05:14.358 EAL: Detected lcore 2 as core 2 on socket 0 00:05:14.358 EAL: Detected lcore 3 as core 3 on socket 0 00:05:14.358 EAL: Detected lcore 4 as core 4 on socket 0 00:05:14.358 EAL: Detected lcore 5 as core 5 on socket 0 00:05:14.358 EAL: Detected lcore 6 as core 6 on socket 0 00:05:14.358 EAL: Detected lcore 7 as core 8 on socket 0 00:05:14.358 EAL: Detected lcore 8 as core 9 on socket 0 00:05:14.358 EAL: Detected lcore 9 as core 10 on socket 0 00:05:14.358 EAL: Detected lcore 10 as core 11 on socket 0 00:05:14.358 EAL: Detected lcore 11 as core 12 on socket 0 00:05:14.358 EAL: Detected lcore 12 as core 13 on socket 0 00:05:14.358 EAL: Detected lcore 13 as core 14 on socket 0 00:05:14.358 EAL: Detected lcore 14 as core 16 on socket 0 00:05:14.358 EAL: Detected lcore 15 as core 17 on socket 0 00:05:14.358 EAL: Detected lcore 16 as core 18 on socket 0 00:05:14.358 EAL: Detected lcore 17 as core 19 on socket 0 00:05:14.358 EAL: Detected lcore 18 as core 20 on socket 0 00:05:14.358 EAL: Detected lcore 19 as core 21 on socket 0 00:05:14.358 EAL: Detected lcore 20 as core 22 on socket 0 00:05:14.358 EAL: Detected lcore 21 as core 24 on socket 0 00:05:14.358 EAL: Detected lcore 22 as core 25 on socket 0 00:05:14.358 EAL: Detected lcore 23 as core 26 on socket 0 00:05:14.358 EAL: Detected lcore 24 as core 27 on socket 0 00:05:14.358 EAL: Detected lcore 25 as core 28 on socket 0 00:05:14.358 EAL: Detected lcore 26 as core 29 on socket 0 00:05:14.358 EAL: Detected lcore 27 as core 30 on socket 0 00:05:14.358 EAL: Detected lcore 28 as core 0 on socket 1 00:05:14.358 EAL: Detected lcore 29 as core 1 on socket 1 00:05:14.358 EAL: Detected lcore 30 as core 2 on socket 1 00:05:14.358 EAL: Detected lcore 31 as core 3 on socket 1 00:05:14.358 EAL: Detected lcore 32 as core 4 on socket 1 00:05:14.358 EAL: Detected lcore 33 as core 5 on socket 1 00:05:14.358 EAL: Detected lcore 34 as core 6 on socket 1 00:05:14.358 EAL: Detected lcore 35 as core 8 on socket 1 00:05:14.358 EAL: Detected lcore 36 as core 9 on socket 1 00:05:14.358 EAL: Detected lcore 37 as core 10 on socket 1 00:05:14.358 EAL: Detected lcore 38 as core 11 on socket 1 00:05:14.358 EAL: Detected lcore 39 as core 12 on socket 1 00:05:14.358 EAL: Detected lcore 40 as core 13 on socket 1 00:05:14.358 EAL: Detected lcore 41 as core 14 on socket 1 00:05:14.358 EAL: Detected lcore 42 as core 16 on socket 1 00:05:14.358 EAL: Detected lcore 43 as core 17 on socket 1 00:05:14.358 EAL: Detected lcore 44 as core 
18 on socket 1 00:05:14.358 EAL: Detected lcore 45 as core 19 on socket 1 00:05:14.358 EAL: Detected lcore 46 as core 20 on socket 1 00:05:14.358 EAL: Detected lcore 47 as core 21 on socket 1 00:05:14.358 EAL: Detected lcore 48 as core 22 on socket 1 00:05:14.358 EAL: Detected lcore 49 as core 24 on socket 1 00:05:14.358 EAL: Detected lcore 50 as core 25 on socket 1 00:05:14.358 EAL: Detected lcore 51 as core 26 on socket 1 00:05:14.358 EAL: Detected lcore 52 as core 27 on socket 1 00:05:14.358 EAL: Detected lcore 53 as core 28 on socket 1 00:05:14.358 EAL: Detected lcore 54 as core 29 on socket 1 00:05:14.358 EAL: Detected lcore 55 as core 30 on socket 1 00:05:14.358 EAL: Detected lcore 56 as core 0 on socket 0 00:05:14.358 EAL: Detected lcore 57 as core 1 on socket 0 00:05:14.358 EAL: Detected lcore 58 as core 2 on socket 0 00:05:14.358 EAL: Detected lcore 59 as core 3 on socket 0 00:05:14.358 EAL: Detected lcore 60 as core 4 on socket 0 00:05:14.358 EAL: Detected lcore 61 as core 5 on socket 0 00:05:14.358 EAL: Detected lcore 62 as core 6 on socket 0 00:05:14.358 EAL: Detected lcore 63 as core 8 on socket 0 00:05:14.358 EAL: Detected lcore 64 as core 9 on socket 0 00:05:14.358 EAL: Detected lcore 65 as core 10 on socket 0 00:05:14.358 EAL: Detected lcore 66 as core 11 on socket 0 00:05:14.358 EAL: Detected lcore 67 as core 12 on socket 0 00:05:14.358 EAL: Detected lcore 68 as core 13 on socket 0 00:05:14.358 EAL: Detected lcore 69 as core 14 on socket 0 00:05:14.358 EAL: Detected lcore 70 as core 16 on socket 0 00:05:14.358 EAL: Detected lcore 71 as core 17 on socket 0 00:05:14.358 EAL: Detected lcore 72 as core 18 on socket 0 00:05:14.358 EAL: Detected lcore 73 as core 19 on socket 0 00:05:14.358 EAL: Detected lcore 74 as core 20 on socket 0 00:05:14.358 EAL: Detected lcore 75 as core 21 on socket 0 00:05:14.358 EAL: Detected lcore 76 as core 22 on socket 0 00:05:14.358 EAL: Detected lcore 77 as core 24 on socket 0 00:05:14.358 EAL: Detected lcore 78 as core 25 on socket 0 00:05:14.358 EAL: Detected lcore 79 as core 26 on socket 0 00:05:14.358 EAL: Detected lcore 80 as core 27 on socket 0 00:05:14.358 EAL: Detected lcore 81 as core 28 on socket 0 00:05:14.358 EAL: Detected lcore 82 as core 29 on socket 0 00:05:14.358 EAL: Detected lcore 83 as core 30 on socket 0 00:05:14.358 EAL: Detected lcore 84 as core 0 on socket 1 00:05:14.358 EAL: Detected lcore 85 as core 1 on socket 1 00:05:14.358 EAL: Detected lcore 86 as core 2 on socket 1 00:05:14.358 EAL: Detected lcore 87 as core 3 on socket 1 00:05:14.358 EAL: Detected lcore 88 as core 4 on socket 1 00:05:14.358 EAL: Detected lcore 89 as core 5 on socket 1 00:05:14.358 EAL: Detected lcore 90 as core 6 on socket 1 00:05:14.358 EAL: Detected lcore 91 as core 8 on socket 1 00:05:14.358 EAL: Detected lcore 92 as core 9 on socket 1 00:05:14.358 EAL: Detected lcore 93 as core 10 on socket 1 00:05:14.358 EAL: Detected lcore 94 as core 11 on socket 1 00:05:14.358 EAL: Detected lcore 95 as core 12 on socket 1 00:05:14.358 EAL: Detected lcore 96 as core 13 on socket 1 00:05:14.358 EAL: Detected lcore 97 as core 14 on socket 1 00:05:14.358 EAL: Detected lcore 98 as core 16 on socket 1 00:05:14.358 EAL: Detected lcore 99 as core 17 on socket 1 00:05:14.358 EAL: Detected lcore 100 as core 18 on socket 1 00:05:14.358 EAL: Detected lcore 101 as core 19 on socket 1 00:05:14.358 EAL: Detected lcore 102 as core 20 on socket 1 00:05:14.358 EAL: Detected lcore 103 as core 21 on socket 1 00:05:14.358 EAL: Detected lcore 104 as core 22 on socket 1 00:05:14.358 
EAL: Detected lcore 105 as core 24 on socket 1 00:05:14.358 EAL: Detected lcore 106 as core 25 on socket 1 00:05:14.359 EAL: Detected lcore 107 as core 26 on socket 1 00:05:14.359 EAL: Detected lcore 108 as core 27 on socket 1 00:05:14.359 EAL: Detected lcore 109 as core 28 on socket 1 00:05:14.359 EAL: Detected lcore 110 as core 29 on socket 1 00:05:14.359 EAL: Detected lcore 111 as core 30 on socket 1 00:05:14.359 EAL: Maximum logical cores by configuration: 128 00:05:14.359 EAL: Detected CPU lcores: 112 00:05:14.359 EAL: Detected NUMA nodes: 2 00:05:14.359 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:14.359 EAL: Detected shared linkage of DPDK 00:05:14.359 EAL: No shared files mode enabled, IPC will be disabled 00:05:14.359 EAL: Bus pci wants IOVA as 'DC' 00:05:14.359 EAL: Buses did not request a specific IOVA mode. 00:05:14.359 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:14.359 EAL: Selected IOVA mode 'VA' 00:05:14.359 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.359 EAL: Probing VFIO support... 00:05:14.359 EAL: IOMMU type 1 (Type 1) is supported 00:05:14.359 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:14.359 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:14.359 EAL: VFIO support initialized 00:05:14.359 EAL: Ask a virtual area of 0x2e000 bytes 00:05:14.359 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:14.359 EAL: Setting up physically contiguous memory... 00:05:14.359 EAL: Setting maximum number of open files to 524288 00:05:14.359 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:14.359 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:14.359 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:14.359 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: 
Virtual area found at 0x201000800000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:14.359 EAL: Ask a virtual area of 0x61000 bytes 00:05:14.359 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:14.359 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:14.359 EAL: Ask a virtual area of 0x400000000 bytes 00:05:14.359 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:14.359 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:14.359 EAL: Hugepages will be freed exactly as allocated. 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: TSC frequency is ~2500000 KHz 00:05:14.359 EAL: Main lcore 0 is ready (tid=7f4b7a825a00;cpuset=[0]) 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 0 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 2MB 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:14.359 EAL: Mem event callback 'spdk:(nil)' registered 00:05:14.359 00:05:14.359 00:05:14.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:14.359 http://cunit.sourceforge.net/ 00:05:14.359 00:05:14.359 00:05:14.359 Suite: components_suite 00:05:14.359 Test: vtophys_malloc_test ...passed 00:05:14.359 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 4MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 4MB 00:05:14.359 EAL: Trying to obtain current memory policy. 
00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 6MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 6MB 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 10MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 10MB 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 18MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 18MB 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 34MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 34MB 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 66MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 66MB 00:05:14.359 EAL: Trying to obtain current memory policy. 
00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.359 EAL: Restoring previous memory policy: 4 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was expanded by 130MB 00:05:14.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.359 EAL: request: mp_malloc_sync 00:05:14.359 EAL: No shared files mode enabled, IPC is disabled 00:05:14.359 EAL: Heap on socket 0 was shrunk by 130MB 00:05:14.359 EAL: Trying to obtain current memory policy. 00:05:14.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.625 EAL: Restoring previous memory policy: 4 00:05:14.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.625 EAL: request: mp_malloc_sync 00:05:14.625 EAL: No shared files mode enabled, IPC is disabled 00:05:14.625 EAL: Heap on socket 0 was expanded by 258MB 00:05:14.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.625 EAL: request: mp_malloc_sync 00:05:14.625 EAL: No shared files mode enabled, IPC is disabled 00:05:14.625 EAL: Heap on socket 0 was shrunk by 258MB 00:05:14.625 EAL: Trying to obtain current memory policy. 00:05:14.625 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:14.625 EAL: Restoring previous memory policy: 4 00:05:14.625 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.625 EAL: request: mp_malloc_sync 00:05:14.625 EAL: No shared files mode enabled, IPC is disabled 00:05:14.625 EAL: Heap on socket 0 was expanded by 514MB 00:05:14.894 EAL: Calling mem event callback 'spdk:(nil)' 00:05:14.894 EAL: request: mp_malloc_sync 00:05:14.894 EAL: No shared files mode enabled, IPC is disabled 00:05:14.894 EAL: Heap on socket 0 was shrunk by 514MB 00:05:14.894 EAL: Trying to obtain current memory policy. 
00:05:14.894 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:15.153 EAL: Restoring previous memory policy: 4 00:05:15.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.153 EAL: request: mp_malloc_sync 00:05:15.153 EAL: No shared files mode enabled, IPC is disabled 00:05:15.153 EAL: Heap on socket 0 was expanded by 1026MB 00:05:15.153 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.411 EAL: request: mp_malloc_sync 00:05:15.411 EAL: No shared files mode enabled, IPC is disabled 00:05:15.411 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:15.411 passed 00:05:15.411 00:05:15.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.411 suites 1 1 n/a 0 0 00:05:15.411 tests 2 2 2 0 0 00:05:15.411 asserts 497 497 497 0 n/a 00:05:15.411 00:05:15.411 Elapsed time = 0.960 seconds 00:05:15.411 EAL: Calling mem event callback 'spdk:(nil)' 00:05:15.411 EAL: request: mp_malloc_sync 00:05:15.411 EAL: No shared files mode enabled, IPC is disabled 00:05:15.411 EAL: Heap on socket 0 was shrunk by 2MB 00:05:15.411 EAL: No shared files mode enabled, IPC is disabled 00:05:15.411 EAL: No shared files mode enabled, IPC is disabled 00:05:15.411 EAL: No shared files mode enabled, IPC is disabled 00:05:15.411 00:05:15.411 real 0m1.091s 00:05:15.411 user 0m0.632s 00:05:15.411 sys 0m0.427s 00:05:15.411 11:31:43 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.411 11:31:43 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:15.411 ************************************ 00:05:15.411 END TEST env_vtophys 00:05:15.411 ************************************ 00:05:15.411 11:31:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.411 11:31:43 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.411 11:31:43 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.411 11:31:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.411 11:31:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.411 ************************************ 00:05:15.411 START TEST env_pci 00:05:15.411 ************************************ 00:05:15.411 11:31:43 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:15.411 00:05:15.411 00:05:15.411 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.411 http://cunit.sourceforge.net/ 00:05:15.411 00:05:15.411 00:05:15.411 Suite: pci 00:05:15.411 Test: pci_hook ...[2024-07-15 11:31:43.453171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1775922 has claimed it 00:05:15.411 EAL: Cannot find device (10000:00:01.0) 00:05:15.411 EAL: Failed to attach device on primary process 00:05:15.411 passed 00:05:15.411 00:05:15.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.411 suites 1 1 n/a 0 0 00:05:15.411 tests 1 1 1 0 0 00:05:15.411 asserts 25 25 25 0 n/a 00:05:15.411 00:05:15.411 Elapsed time = 0.035 seconds 00:05:15.411 00:05:15.411 real 0m0.059s 00:05:15.411 user 0m0.019s 00:05:15.411 sys 0m0.040s 00:05:15.411 11:31:43 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.411 11:31:43 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:15.411 ************************************ 00:05:15.411 END TEST env_pci 00:05:15.411 ************************************ 
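For reference, the env unit tests driven by run_test above appear to be standalone binaries invoked without arguments; a sketch of running them directly, assuming an already-built SPDK tree at the workspace path from the log (SPDK_DIR is a stand-in variable, not part of the harness):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/test/env/memory/memory_ut"   # mem map alloc/free, translation, registration
    "$SPDK_DIR/test/env/vtophys/vtophys"    # vtophys_malloc_test / vtophys_spdk_malloc_test
    "$SPDK_DIR/test/env/pci/pci_ut"         # pci_hook device-claim test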
00:05:15.670 11:31:43 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.670 11:31:43 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:15.670 11:31:43 env -- env/env.sh@15 -- # uname 00:05:15.670 11:31:43 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:15.670 11:31:43 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:15.670 11:31:43 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.670 11:31:43 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:15.670 11:31:43 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.670 11:31:43 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.670 ************************************ 00:05:15.670 START TEST env_dpdk_post_init 00:05:15.670 ************************************ 00:05:15.670 11:31:43 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:15.670 EAL: Detected CPU lcores: 112 00:05:15.670 EAL: Detected NUMA nodes: 2 00:05:15.670 EAL: Detected shared linkage of DPDK 00:05:15.670 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.670 EAL: Selected IOVA mode 'VA' 00:05:15.670 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.670 EAL: VFIO support initialized 00:05:15.670 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.670 EAL: Using IOMMU type 1 (Type 1) 00:05:15.670 EAL: Ignore mapping IO port bar(1) 00:05:15.670 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:15.670 EAL: Ignore mapping IO port bar(1) 00:05:15.670 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:15.670 EAL: Ignore mapping IO port bar(1) 00:05:15.670 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:15.670 EAL: Ignore mapping IO port bar(1) 00:05:15.670 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:15.670 EAL: Ignore mapping IO port bar(1) 00:05:15.670 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 
00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:15.929 EAL: Ignore mapping IO port bar(1) 00:05:15.929 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:16.863 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:05:20.145 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:05:20.145 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:05:20.711 Starting DPDK initialization... 00:05:20.711 Starting SPDK post initialization... 00:05:20.711 SPDK NVMe probe 00:05:20.711 Attaching to 0000:d8:00.0 00:05:20.711 Attached to 0000:d8:00.0 00:05:20.711 Cleaning up... 00:05:20.711 00:05:20.711 real 0m4.959s 00:05:20.711 user 0m3.683s 00:05:20.711 sys 0m0.327s 00:05:20.711 11:31:48 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.711 11:31:48 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:20.711 ************************************ 00:05:20.711 END TEST env_dpdk_post_init 00:05:20.711 ************************************ 00:05:20.711 11:31:48 env -- common/autotest_common.sh@1142 -- # return 0 00:05:20.711 11:31:48 env -- env/env.sh@26 -- # uname 00:05:20.711 11:31:48 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:20.711 11:31:48 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.711 11:31:48 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.711 11:31:48 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.711 11:31:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:20.711 ************************************ 00:05:20.711 START TEST env_mem_callbacks 00:05:20.711 ************************************ 00:05:20.711 11:31:48 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:20.711 EAL: Detected CPU lcores: 112 00:05:20.711 EAL: Detected NUMA nodes: 2 00:05:20.711 EAL: Detected shared linkage of DPDK 00:05:20.711 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:20.711 EAL: Selected IOVA mode 'VA' 00:05:20.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.711 EAL: VFIO support initialized 00:05:20.711 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:20.711 00:05:20.711 00:05:20.711 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.711 http://cunit.sourceforge.net/ 00:05:20.711 00:05:20.711 00:05:20.711 Suite: memory 00:05:20.711 Test: test ... 
00:05:20.711 register 0x200000200000 2097152
00:05:20.711 malloc 3145728
00:05:20.711 register 0x200000400000 4194304
00:05:20.711 buf 0x200000500000 len 3145728 PASSED
00:05:20.711 malloc 64
00:05:20.711 buf 0x2000004fff40 len 64 PASSED
00:05:20.711 malloc 4194304
00:05:20.711 register 0x200000800000 6291456
00:05:20.711 buf 0x200000a00000 len 4194304 PASSED
00:05:20.711 free 0x200000500000 3145728
00:05:20.711 free 0x2000004fff40 64
00:05:20.711 unregister 0x200000400000 4194304 PASSED
00:05:20.711 free 0x200000a00000 4194304
00:05:20.711 unregister 0x200000800000 6291456 PASSED
00:05:20.711 malloc 8388608
00:05:20.711 register 0x200000400000 10485760
00:05:20.711 buf 0x200000600000 len 8388608 PASSED
00:05:20.711 free 0x200000600000 8388608
00:05:20.711 unregister 0x200000400000 10485760 PASSED
00:05:20.711 passed
00:05:20.711
00:05:20.711 Run Summary: Type Total Ran Passed Failed Inactive
00:05:20.711 suites 1 1 n/a 0 0
00:05:20.711 tests 1 1 1 0 0
00:05:20.711 asserts 15 15 15 0 n/a
00:05:20.711
00:05:20.711 Elapsed time = 0.006 seconds
00:05:20.711
00:05:20.711 real 0m0.068s
00:05:20.711 user 0m0.023s
00:05:20.711 sys 0m0.045s
00:05:20.711 11:31:48 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.711 11:31:48 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:20.711 ************************************
00:05:20.711 END TEST env_mem_callbacks
00:05:20.711 ************************************
00:05:20.711 11:31:48 env -- common/autotest_common.sh@1142 -- # return 0
00:05:20.711
00:05:20.711 real 0m6.843s
00:05:20.711 user 0m4.680s
00:05:20.711 sys 0m1.226s
00:05:20.711 11:31:48 env -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:20.711 11:31:48 env -- common/autotest_common.sh@10 -- # set +x
00:05:20.711 ************************************
00:05:20.711 END TEST env
00:05:20.711 ************************************
00:05:20.711 11:31:48 -- common/autotest_common.sh@1142 -- # return 0
00:05:20.711 11:31:48 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.711 11:31:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:20.711 11:31:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:20.711 11:31:48 -- common/autotest_common.sh@10 -- # set +x
00:05:20.969 ************************************
00:05:20.969 START TEST rpc
00:05:20.969 ************************************
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:20.969 * Looking for test storage...
00:05:20.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:20.969 11:31:48 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1776897
00:05:20.969 11:31:48 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:20.969 11:31:48 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:20.969 11:31:48 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1776897
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@829 -- # '[' -z 1776897 ']'
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
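(The two env suites above can be reproduced by hand from a built SPDK tree. A minimal sketch, assuming the same workspace layout as this job; both invocations are taken verbatim from the run_test lines above, and the process needs the usual hugepage/VFIO privileges:)

    # location of the SPDK checkout used by this job
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # DPDK post-init test: pinned to core 0 with a fixed base virtual address
    $SPDK_DIR/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
    # memory-callback test: the register/unregister lines above are its trace
    # of EAL memory event notifications for each malloc'd buffer
    $SPDK_DIR/test/env/mem_callbacks/mem_callbacks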
00:05:20.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:20.969 11:31:48 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.969 [2024-07-15 11:31:48.972503] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:05:20.969 [2024-07-15 11:31:48.972554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1776897 ]
00:05:20.969 EAL: No free 2048 kB hugepages reported on node 1
00:05:20.969 [2024-07-15 11:31:49.042509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.228 [2024-07-15 11:31:49.117884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:21.228 [2024-07-15 11:31:49.117920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1776897' to capture a snapshot of events at runtime.
00:05:21.228 [2024-07-15 11:31:49.117930] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:21.228 [2024-07-15 11:31:49.117949] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:21.228 [2024-07-15 11:31:49.117955] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1776897 for offline analysis/debug.
00:05:21.228 [2024-07-15 11:31:49.117977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.794 11:31:49 rpc -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:21.794 11:31:49 rpc -- common/autotest_common.sh@862 -- # return 0
00:05:21.794 11:31:49 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:21.794 11:31:49 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:21.794 11:31:49 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:21.794 11:31:49 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:21.794 11:31:49 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:21.794 11:31:49 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:21.794 11:31:49 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:21.794 ************************************
00:05:21.794 START TEST rpc_integrity
00:05:21.794 ************************************
00:05:21.794 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:21.794 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:21.794 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.794 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:21.794 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:21.795 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:21.795 {
00:05:21.795 "name": "Malloc0",
00:05:21.795 "aliases": [
00:05:21.795 "79eef590-cd4a-4490-9607-41712c910194"
00:05:21.795 ],
00:05:21.795 "product_name": "Malloc disk",
00:05:21.795 "block_size": 512,
00:05:21.795 "num_blocks": 16384,
00:05:21.795 "uuid": "79eef590-cd4a-4490-9607-41712c910194",
00:05:21.795 "assigned_rate_limits": {
00:05:21.795 "rw_ios_per_sec": 0,
00:05:21.795 "rw_mbytes_per_sec": 0,
00:05:21.795 "r_mbytes_per_sec": 0,
00:05:21.795 "w_mbytes_per_sec": 0
00:05:21.795 },
00:05:21.795 "claimed": false,
00:05:21.795 "zoned": false,
00:05:21.795 "supported_io_types": {
00:05:21.795 "read": true,
00:05:21.795 "write": true,
00:05:21.795 "unmap": true,
00:05:21.795 "flush": true,
00:05:21.795 "reset": true,
00:05:21.795 "nvme_admin": false,
00:05:21.795 "nvme_io": false,
00:05:21.795 "nvme_io_md": false,
00:05:21.795 "write_zeroes": true,
00:05:21.795 "zcopy": true,
00:05:21.795 "get_zone_info": false,
00:05:21.795 "zone_management": false,
00:05:21.795 "zone_append": false,
00:05:21.795 "compare": false,
00:05:21.795 "compare_and_write": false,
00:05:21.795 "abort": true,
00:05:21.795 "seek_hole": false,
00:05:21.795 "seek_data": false,
00:05:21.795 "copy": true,
00:05:21.795 "nvme_iov_md": false
00:05:21.795 },
00:05:21.795 "memory_domains": [
00:05:21.795 {
00:05:21.795 "dma_device_id": "system",
00:05:21.795 "dma_device_type": 1
00:05:21.795 },
00:05:21.795 {
00:05:21.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:21.795 "dma_device_type": 2
00:05:21.795 }
00:05:21.795 ],
00:05:21.795 "driver_specific": {}
00:05:21.795 }
00:05:21.795 ]'
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:21.795 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:22.053 [2024-07-15 11:31:49.903296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:22.053 [2024-07-15 11:31:49.903325] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:22.053 [2024-07-15 11:31:49.903339] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x22c0440
00:05:22.053 [2024-07-15 11:31:49.903347] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
[2024-07-15 11:31:49.904407] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:22.053 [2024-07-15 11:31:49.904428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:22.053 Passthru0
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:22.053 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:22.053 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:22.053 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:22.053 {
00:05:22.053 "name": "Malloc0",
00:05:22.053 "aliases": [
00:05:22.053 "79eef590-cd4a-4490-9607-41712c910194"
00:05:22.053 ],
00:05:22.053 "product_name": "Malloc disk",
00:05:22.053 "block_size": 512,
00:05:22.053 "num_blocks": 16384,
00:05:22.053 "uuid": "79eef590-cd4a-4490-9607-41712c910194",
00:05:22.053 "assigned_rate_limits": {
00:05:22.053 "rw_ios_per_sec": 0,
00:05:22.053 "rw_mbytes_per_sec": 0,
00:05:22.053 "r_mbytes_per_sec": 0,
00:05:22.053 "w_mbytes_per_sec": 0
00:05:22.053 },
00:05:22.053 "claimed": true,
00:05:22.053 "claim_type": "exclusive_write",
00:05:22.053 "zoned": false,
00:05:22.053 "supported_io_types": {
00:05:22.053 "read": true,
00:05:22.053 "write": true,
00:05:22.053 "unmap": true,
00:05:22.053 "flush": true,
00:05:22.053 "reset": true,
00:05:22.053 "nvme_admin": false,
00:05:22.053 "nvme_io": false,
00:05:22.053 "nvme_io_md": false,
00:05:22.053 "write_zeroes": true,
00:05:22.053 "zcopy": true,
00:05:22.053 "get_zone_info": false,
00:05:22.053 "zone_management": false,
00:05:22.053 "zone_append": false,
00:05:22.053 "compare": false,
00:05:22.053 "compare_and_write": false,
00:05:22.053 "abort": true,
00:05:22.053 "seek_hole": false,
00:05:22.053 "seek_data": false,
00:05:22.053 "copy": true,
00:05:22.053 "nvme_iov_md": false
00:05:22.053 },
00:05:22.053 "memory_domains": [
00:05:22.053 {
00:05:22.053 "dma_device_id": "system",
00:05:22.053 "dma_device_type": 1
00:05:22.053 },
00:05:22.053 {
00:05:22.053 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:22.053 "dma_device_type": 2
00:05:22.053 }
00:05:22.053 ],
00:05:22.053 "driver_specific": {}
00:05:22.053 },
00:05:22.053 {
00:05:22.053 "name": "Passthru0",
00:05:22.053 "aliases": [
00:05:22.053 "70340055-0e62-5169-9439-8ec14ca05dbc"
00:05:22.053 ],
00:05:22.053 "product_name": "passthru",
00:05:22.053 "block_size": 512,
00:05:22.053 "num_blocks": 16384,
00:05:22.053 "uuid": "70340055-0e62-5169-9439-8ec14ca05dbc",
00:05:22.053 "assigned_rate_limits": {
00:05:22.053 "rw_ios_per_sec": 0,
00:05:22.053 "rw_mbytes_per_sec": 0,
00:05:22.053 "r_mbytes_per_sec": 0,
00:05:22.053 "w_mbytes_per_sec": 0
00:05:22.053 },
00:05:22.053 "claimed": false,
00:05:22.053 "zoned": false,
00:05:22.053 "supported_io_types": {
00:05:22.053 "read": true,
00:05:22.053 "write": true,
00:05:22.053 "unmap": true,
00:05:22.053 "flush": true,
00:05:22.053 "reset": true,
00:05:22.053 "nvme_admin": false,
00:05:22.053 "nvme_io": false,
00:05:22.053 "nvme_io_md": false,
00:05:22.053 "write_zeroes": true,
00:05:22.053 "zcopy": true,
00:05:22.053 "get_zone_info": false,
00:05:22.053 "zone_management": false,
00:05:22.053 "zone_append": false,
00:05:22.053 "compare": false,
00:05:22.053 "compare_and_write": false,
00:05:22.054 "abort": true,
00:05:22.054 "seek_hole": false,
00:05:22.054 "seek_data": false, 00:05:22.054 "copy": true, 00:05:22.054 "nvme_iov_md": false 00:05:22.054 }, 00:05:22.054 "memory_domains": [ 00:05:22.054 { 00:05:22.054 "dma_device_id": "system", 00:05:22.054 "dma_device_type": 1 00:05:22.054 }, 00:05:22.054 { 00:05:22.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.054 "dma_device_type": 2 00:05:22.054 } 00:05:22.054 ], 00:05:22.054 "driver_specific": { 00:05:22.054 "passthru": { 00:05:22.054 "name": "Passthru0", 00:05:22.054 "base_bdev_name": "Malloc0" 00:05:22.054 } 00:05:22.054 } 00:05:22.054 } 00:05:22.054 ]' 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 11:31:49 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.054 11:31:49 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.054 11:31:50 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.054 00:05:22.054 real 0m0.257s 00:05:22.054 user 0m0.165s 00:05:22.054 sys 0m0.029s 00:05:22.054 11:31:50 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.054 11:31:50 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 ************************************ 00:05:22.054 END TEST rpc_integrity 00:05:22.054 ************************************ 00:05:22.054 11:31:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.054 11:31:50 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:22.054 11:31:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.054 11:31:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.054 11:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 ************************************ 00:05:22.054 START TEST rpc_plugins 00:05:22.054 ************************************ 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:22.054 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.054 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:22.054 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.054 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.054 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:22.054 { 00:05:22.054 "name": "Malloc1", 00:05:22.054 "aliases": [ 00:05:22.054 "69a20b3d-edfe-45d2-9ec8-700b32e25966" 00:05:22.054 ], 00:05:22.054 "product_name": "Malloc disk", 00:05:22.054 "block_size": 4096, 00:05:22.054 "num_blocks": 256, 00:05:22.054 "uuid": "69a20b3d-edfe-45d2-9ec8-700b32e25966", 00:05:22.054 "assigned_rate_limits": { 00:05:22.054 "rw_ios_per_sec": 0, 00:05:22.054 "rw_mbytes_per_sec": 0, 00:05:22.054 "r_mbytes_per_sec": 0, 00:05:22.054 "w_mbytes_per_sec": 0 00:05:22.054 }, 00:05:22.054 "claimed": false, 00:05:22.054 "zoned": false, 00:05:22.054 "supported_io_types": { 00:05:22.054 "read": true, 00:05:22.054 "write": true, 00:05:22.054 "unmap": true, 00:05:22.054 "flush": true, 00:05:22.054 "reset": true, 00:05:22.054 "nvme_admin": false, 00:05:22.054 "nvme_io": false, 00:05:22.054 "nvme_io_md": false, 00:05:22.054 "write_zeroes": true, 00:05:22.054 "zcopy": true, 00:05:22.054 "get_zone_info": false, 00:05:22.054 "zone_management": false, 00:05:22.054 "zone_append": false, 00:05:22.054 "compare": false, 00:05:22.054 "compare_and_write": false, 00:05:22.054 "abort": true, 00:05:22.054 "seek_hole": false, 00:05:22.054 "seek_data": false, 00:05:22.054 "copy": true, 00:05:22.054 "nvme_iov_md": false 00:05:22.054 }, 00:05:22.054 "memory_domains": [ 00:05:22.054 { 00:05:22.054 "dma_device_id": "system", 00:05:22.054 "dma_device_type": 1 00:05:22.054 }, 00:05:22.054 { 00:05:22.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.054 "dma_device_type": 2 00:05:22.054 } 00:05:22.054 ], 00:05:22.054 "driver_specific": {} 00:05:22.054 } 00:05:22.054 ]' 00:05:22.054 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:22.312 11:31:50 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:22.312 00:05:22.312 real 0m0.144s 00:05:22.312 user 0m0.083s 00:05:22.312 sys 0m0.024s 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.312 11:31:50 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 ************************************ 00:05:22.312 END TEST rpc_plugins 00:05:22.312 ************************************ 00:05:22.312 11:31:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.312 11:31:50 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:22.312 11:31:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.312 11:31:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.312 11:31:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 ************************************ 00:05:22.312 START TEST rpc_trace_cmd_test 00:05:22.312 ************************************ 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.312 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:22.312 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1776897", 00:05:22.312 "tpoint_group_mask": "0x8", 00:05:22.312 "iscsi_conn": { 00:05:22.312 "mask": "0x2", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "scsi": { 00:05:22.312 "mask": "0x4", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "bdev": { 00:05:22.312 "mask": "0x8", 00:05:22.312 "tpoint_mask": "0xffffffffffffffff" 00:05:22.312 }, 00:05:22.312 "nvmf_rdma": { 00:05:22.312 "mask": "0x10", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "nvmf_tcp": { 00:05:22.312 "mask": "0x20", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "ftl": { 00:05:22.312 "mask": "0x40", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "blobfs": { 00:05:22.312 "mask": "0x80", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "dsa": { 00:05:22.312 "mask": "0x200", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "thread": { 00:05:22.312 "mask": "0x400", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "nvme_pcie": { 00:05:22.312 "mask": "0x800", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.312 "iaa": { 00:05:22.312 "mask": "0x1000", 00:05:22.312 "tpoint_mask": "0x0" 00:05:22.312 }, 00:05:22.313 "nvme_tcp": { 00:05:22.313 "mask": "0x2000", 00:05:22.313 "tpoint_mask": "0x0" 00:05:22.313 }, 00:05:22.313 "bdev_nvme": { 00:05:22.313 "mask": "0x4000", 00:05:22.313 "tpoint_mask": "0x0" 00:05:22.313 }, 00:05:22.313 "sock": { 00:05:22.313 "mask": "0x8000", 00:05:22.313 "tpoint_mask": "0x0" 00:05:22.313 } 00:05:22.313 }' 00:05:22.313 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:22.313 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:22.313 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:22.570 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:22.570 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
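(The trace checks just above pass because the target was started with -e bdev: trace_get_info reports tpoint_group_mask 0x8 and a fully-set bdev mask while every other group stays 0x0. A hedged sketch of the capture workflow that the target's startup notices describe; binary locations assume a default build, and the pid is the one spdk_tgt prints:)

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/build/bin/spdk_tgt -e bdev &        # enable the bdev tracepoint group
    TGT_PID=$!
    sleep 1
    $SPDK_DIR/scripts/rpc.py trace_get_info       # masks + tpoint_shm_path, as shown above
    # live snapshot, exactly as the startup notice suggests:
    $SPDK_DIR/build/bin/spdk_trace -s spdk_tgt -p $TGT_PID
    # or copy /dev/shm/spdk_tgt_trace.pid$TGT_PID for offline analysis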
00:05:22.571
00:05:22.571 real 0m0.209s
00:05:22.571 user 0m0.171s
00:05:22.571 sys 0m0.028s
00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:22.571 11:31:50 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:05:22.571 ************************************
00:05:22.571 END TEST rpc_trace_cmd_test
00:05:22.571 ************************************
00:05:22.571 11:31:50 rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:22.571 11:31:50 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:05:22.571 11:31:50 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:05:22.571 11:31:50 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:05:22.571 11:31:50 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:22.571 11:31:50 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:22.571 11:31:50 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:22.571 ************************************
00:05:22.571 START TEST rpc_daemon_integrity
00:05:22.571 ************************************
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:22.571 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:22.829 {
00:05:22.829 "name": "Malloc2",
00:05:22.829 "aliases": [
00:05:22.829 "69c9682e-48a4-43c0-8710-3dbece5b1802"
00:05:22.829 ],
00:05:22.829 "product_name": "Malloc disk",
00:05:22.829 "block_size": 512,
00:05:22.829 "num_blocks": 16384,
00:05:22.829 "uuid": "69c9682e-48a4-43c0-8710-3dbece5b1802",
00:05:22.829 "assigned_rate_limits": {
00:05:22.829 "rw_ios_per_sec": 0,
00:05:22.829 "rw_mbytes_per_sec": 0,
00:05:22.829 "r_mbytes_per_sec": 0,
00:05:22.829 "w_mbytes_per_sec": 0
00:05:22.829 },
00:05:22.829 "claimed": false,
00:05:22.829 "zoned": false,
00:05:22.829 "supported_io_types": {
00:05:22.829 "read": true,
00:05:22.829 "write": true,
00:05:22.829 "unmap": true,
00:05:22.829 "flush": true,
00:05:22.829 "reset": true,
00:05:22.829 "nvme_admin": false,
00:05:22.829 "nvme_io": false,
00:05:22.829 "nvme_io_md": false, 00:05:22.829 "write_zeroes": true, 00:05:22.829 "zcopy": true, 00:05:22.829 "get_zone_info": false, 00:05:22.829 "zone_management": false, 00:05:22.829 "zone_append": false, 00:05:22.829 "compare": false, 00:05:22.829 "compare_and_write": false, 00:05:22.829 "abort": true, 00:05:22.829 "seek_hole": false, 00:05:22.829 "seek_data": false, 00:05:22.829 "copy": true, 00:05:22.829 "nvme_iov_md": false 00:05:22.829 }, 00:05:22.829 "memory_domains": [ 00:05:22.829 { 00:05:22.829 "dma_device_id": "system", 00:05:22.829 "dma_device_type": 1 00:05:22.829 }, 00:05:22.829 { 00:05:22.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.829 "dma_device_type": 2 00:05:22.829 } 00:05:22.829 ], 00:05:22.829 "driver_specific": {} 00:05:22.829 } 00:05:22.829 ]' 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.829 [2024-07-15 11:31:50.733526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:22.829 [2024-07-15 11:31:50.733555] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:22.829 [2024-07-15 11:31:50.733568] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2465e70 00:05:22.829 [2024-07-15 11:31:50.733576] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:22.829 [2024-07-15 11:31:50.734494] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:22.829 [2024-07-15 11:31:50.734518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:22.829 Passthru0 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.829 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:22.829 { 00:05:22.829 "name": "Malloc2", 00:05:22.829 "aliases": [ 00:05:22.829 "69c9682e-48a4-43c0-8710-3dbece5b1802" 00:05:22.829 ], 00:05:22.829 "product_name": "Malloc disk", 00:05:22.829 "block_size": 512, 00:05:22.829 "num_blocks": 16384, 00:05:22.829 "uuid": "69c9682e-48a4-43c0-8710-3dbece5b1802", 00:05:22.829 "assigned_rate_limits": { 00:05:22.829 "rw_ios_per_sec": 0, 00:05:22.829 "rw_mbytes_per_sec": 0, 00:05:22.829 "r_mbytes_per_sec": 0, 00:05:22.829 "w_mbytes_per_sec": 0 00:05:22.829 }, 00:05:22.829 "claimed": true, 00:05:22.829 "claim_type": "exclusive_write", 00:05:22.829 "zoned": false, 00:05:22.829 "supported_io_types": { 00:05:22.829 "read": true, 00:05:22.829 "write": true, 00:05:22.830 "unmap": true, 00:05:22.830 "flush": true, 00:05:22.830 "reset": true, 00:05:22.830 "nvme_admin": false, 00:05:22.830 "nvme_io": false, 00:05:22.830 "nvme_io_md": false, 00:05:22.830 "write_zeroes": true, 00:05:22.830 "zcopy": true, 00:05:22.830 "get_zone_info": 
false, 00:05:22.830 "zone_management": false, 00:05:22.830 "zone_append": false, 00:05:22.830 "compare": false, 00:05:22.830 "compare_and_write": false, 00:05:22.830 "abort": true, 00:05:22.830 "seek_hole": false, 00:05:22.830 "seek_data": false, 00:05:22.830 "copy": true, 00:05:22.830 "nvme_iov_md": false 00:05:22.830 }, 00:05:22.830 "memory_domains": [ 00:05:22.830 { 00:05:22.830 "dma_device_id": "system", 00:05:22.830 "dma_device_type": 1 00:05:22.830 }, 00:05:22.830 { 00:05:22.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.830 "dma_device_type": 2 00:05:22.830 } 00:05:22.830 ], 00:05:22.830 "driver_specific": {} 00:05:22.830 }, 00:05:22.830 { 00:05:22.830 "name": "Passthru0", 00:05:22.830 "aliases": [ 00:05:22.830 "e31340c1-17a8-57a1-aee7-3f3360401803" 00:05:22.830 ], 00:05:22.830 "product_name": "passthru", 00:05:22.830 "block_size": 512, 00:05:22.830 "num_blocks": 16384, 00:05:22.830 "uuid": "e31340c1-17a8-57a1-aee7-3f3360401803", 00:05:22.830 "assigned_rate_limits": { 00:05:22.830 "rw_ios_per_sec": 0, 00:05:22.830 "rw_mbytes_per_sec": 0, 00:05:22.830 "r_mbytes_per_sec": 0, 00:05:22.830 "w_mbytes_per_sec": 0 00:05:22.830 }, 00:05:22.830 "claimed": false, 00:05:22.830 "zoned": false, 00:05:22.830 "supported_io_types": { 00:05:22.830 "read": true, 00:05:22.830 "write": true, 00:05:22.830 "unmap": true, 00:05:22.830 "flush": true, 00:05:22.830 "reset": true, 00:05:22.830 "nvme_admin": false, 00:05:22.830 "nvme_io": false, 00:05:22.830 "nvme_io_md": false, 00:05:22.830 "write_zeroes": true, 00:05:22.830 "zcopy": true, 00:05:22.830 "get_zone_info": false, 00:05:22.830 "zone_management": false, 00:05:22.830 "zone_append": false, 00:05:22.830 "compare": false, 00:05:22.830 "compare_and_write": false, 00:05:22.830 "abort": true, 00:05:22.830 "seek_hole": false, 00:05:22.830 "seek_data": false, 00:05:22.830 "copy": true, 00:05:22.830 "nvme_iov_md": false 00:05:22.830 }, 00:05:22.830 "memory_domains": [ 00:05:22.830 { 00:05:22.830 "dma_device_id": "system", 00:05:22.830 "dma_device_type": 1 00:05:22.830 }, 00:05:22.830 { 00:05:22.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:22.830 "dma_device_type": 2 00:05:22.830 } 00:05:22.830 ], 00:05:22.830 "driver_specific": { 00:05:22.830 "passthru": { 00:05:22.830 "name": "Passthru0", 00:05:22.830 "base_bdev_name": "Malloc2" 00:05:22.830 } 00:05:22.830 } 00:05:22.830 } 00:05:22.830 ]' 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.830 11:31:50 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:22.830 00:05:22.830 real 0m0.259s 00:05:22.830 user 0m0.159s 00:05:22.830 sys 0m0.041s 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.830 11:31:50 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:22.830 ************************************ 00:05:22.830 END TEST rpc_daemon_integrity 00:05:22.830 ************************************ 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.830 11:31:50 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:22.830 11:31:50 rpc -- rpc/rpc.sh@84 -- # killprocess 1776897 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@948 -- # '[' -z 1776897 ']' 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@952 -- # kill -0 1776897 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@953 -- # uname 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.830 11:31:50 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1776897 00:05:23.088 11:31:50 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.088 11:31:50 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.088 11:31:50 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1776897' 00:05:23.088 killing process with pid 1776897 00:05:23.088 11:31:50 rpc -- common/autotest_common.sh@967 -- # kill 1776897 00:05:23.088 11:31:50 rpc -- common/autotest_common.sh@972 -- # wait 1776897 00:05:23.347 00:05:23.347 real 0m2.452s 00:05:23.347 user 0m3.053s 00:05:23.347 sys 0m0.767s 00:05:23.347 11:31:51 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.347 11:31:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.347 ************************************ 00:05:23.347 END TEST rpc 00:05:23.347 ************************************ 00:05:23.347 11:31:51 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.347 11:31:51 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.347 11:31:51 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.347 11:31:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.347 11:31:51 -- common/autotest_common.sh@10 -- # set +x 00:05:23.347 ************************************ 00:05:23.347 START TEST skip_rpc 00:05:23.347 ************************************ 00:05:23.347 11:31:51 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:23.605 * Looking for test storage... 
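(The skip_rpc case starting here checks the negative path: spdk_tgt is launched with --no-rpc-server, so the spdk_get_version call must fail, and the NOT wrapper turns that expected failure into a pass, as the xtrace below shows. A reduced sketch of the same check, under the same path assumptions as above:)

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                   # same settle time the test uses
    if $SPDK_DIR/scripts/rpc.py spdk_get_version; then
        echo "RPC unexpectedly succeeded" >&2
        kill "$pid"; exit 1
    fi
    kill "$pid"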
00:05:23.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:23.605 11:31:51 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:23.605 11:31:51 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:05:23.605 11:31:51 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:05:23.605 11:31:51 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:23.605 11:31:51 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:23.605 11:31:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.605 ************************************
00:05:23.605 START TEST skip_rpc
00:05:23.605 ************************************
00:05:23.605 11:31:51 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc
00:05:23.605 11:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1777600
00:05:23.605 11:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:23.605 11:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:05:23.605 11:31:51 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:05:23.605 [2024-07-15 11:31:51.567788] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:05:23.605 [2024-07-15 11:31:51.567842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777600 ]
00:05:23.605 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.605 [2024-07-15 11:31:51.636652] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.605 [2024-07-15 11:31:51.705739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1777600
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1777600 ']'
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1777600
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1777600
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1777600'
00:05:28.868 killing process with pid 1777600
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1777600
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1777600
00:05:28.868
00:05:28.868 real 0m5.373s
00:05:28.868 user 0m5.125s
00:05:28.868 sys 0m0.288s
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.868 11:31:56 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.868 ************************************
00:05:28.868 END TEST skip_rpc
00:05:28.868 ************************************
00:05:28.868 11:31:56 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:28.868 11:31:56 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:05:28.868 11:31:56 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:28.868 11:31:56 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:28.868 11:31:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.868 ************************************
00:05:28.868 START TEST skip_rpc_with_json
00:05:28.868 ************************************
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1778457
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1778457
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1778457 ']'
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
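(skip_rpc_with_json, which has just started, validates a configuration round trip: a first target is configured over RPC, its live state is dumped with save_config, and a second target is then booted non-interactively from that file, with success detected by grepping its log. In outline, a sketch using the same paths and RPC names the test itself uses:)

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp          # logs 'TCP Transport Init'
    $SPDK_DIR/scripts/rpc.py save_config > $SPDK_DIR/test/rpc/config.json
    # replay the saved config with no RPC server; the transport must come up again
    $SPDK_DIR/build/bin/spdk_tgt --no-rpc-server -m 0x1 \
        --json $SPDK_DIR/test/rpc/config.json > $SPDK_DIR/test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' $SPDK_DIR/test/rpc/log.txt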
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:28.868 11:31:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:29.126 [2024-07-15 11:31:57.020035] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:05:29.126 [2024-07-15 11:31:57.020080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1778457 ]
00:05:29.126 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.126 [2024-07-15 11:31:57.088320] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:29.126 [2024-07-15 11:31:57.161995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.059 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:30.059 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0
00:05:30.059 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:05:30.059 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:30.059 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.059 [2024-07-15 11:31:57.806433] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:05:30.059 request:
00:05:30.059 {
00:05:30.059 "trtype": "tcp",
00:05:30.059 "method": "nvmf_get_transports",
00:05:30.059 "req_id": 1
00:05:30.059 }
00:05:30.059 Got JSON-RPC error response
00:05:30.059 response:
00:05:30.059 {
00:05:30.059 "code": -19,
00:05:30.059 "message": "No such device"
00:05:30.059 }
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.060 [2024-07-15 11:31:57.814508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:30.060 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:30.060 {
00:05:30.060 "subsystems": [
00:05:30.060 {
00:05:30.060 "subsystem": "vfio_user_target",
00:05:30.060 "config": null
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "keyring",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "iobuf",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "iobuf_set_options",
00:05:30.060 "params": {
00:05:30.060 "small_pool_count": 8192,
00:05:30.060 "large_pool_count": 1024,
00:05:30.060 "small_bufsize": 8192,
00:05:30.060 "large_bufsize": 135168
00:05:30.060 }
00:05:30.060 }
00:05:30.060 ]
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "sock",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "sock_set_default_impl",
00:05:30.060 "params": {
00:05:30.060 "impl_name": "posix"
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "sock_impl_set_options",
00:05:30.060 "params": {
00:05:30.060 "impl_name": "ssl",
00:05:30.060 "recv_buf_size": 4096,
00:05:30.060 "send_buf_size": 4096,
00:05:30.060 "enable_recv_pipe": true,
00:05:30.060 "enable_quickack": false,
00:05:30.060 "enable_placement_id": 0,
00:05:30.060 "enable_zerocopy_send_server": true,
00:05:30.060 "enable_zerocopy_send_client": false,
00:05:30.060 "zerocopy_threshold": 0,
00:05:30.060 "tls_version": 0,
00:05:30.060 "enable_ktls": false
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "sock_impl_set_options",
00:05:30.060 "params": {
00:05:30.060 "impl_name": "posix",
00:05:30.060 "recv_buf_size": 2097152,
00:05:30.060 "send_buf_size": 2097152,
00:05:30.060 "enable_recv_pipe": true,
00:05:30.060 "enable_quickack": false,
00:05:30.060 "enable_placement_id": 0,
00:05:30.060 "enable_zerocopy_send_server": true,
00:05:30.060 "enable_zerocopy_send_client": false,
00:05:30.060 "zerocopy_threshold": 0,
00:05:30.060 "tls_version": 0,
00:05:30.060 "enable_ktls": false
00:05:30.060 }
00:05:30.060 }
00:05:30.060 ]
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "vmd",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "accel",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "accel_set_options",
00:05:30.060 "params": {
00:05:30.060 "small_cache_size": 128,
00:05:30.060 "large_cache_size": 16,
00:05:30.060 "task_count": 2048,
00:05:30.060 "sequence_count": 2048,
00:05:30.060 "buf_count": 2048
00:05:30.060 }
00:05:30.060 }
00:05:30.060 ]
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "bdev",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "bdev_set_options",
00:05:30.060 "params": {
00:05:30.060 "bdev_io_pool_size": 65535,
00:05:30.060 "bdev_io_cache_size": 256,
00:05:30.060 "bdev_auto_examine": true,
00:05:30.060 "iobuf_small_cache_size": 128,
00:05:30.060 "iobuf_large_cache_size": 16
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "bdev_raid_set_options",
00:05:30.060 "params": {
00:05:30.060 "process_window_size_kb": 1024
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "bdev_iscsi_set_options",
00:05:30.060 "params": {
00:05:30.060 "timeout_sec": 30
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "bdev_nvme_set_options",
00:05:30.060 "params": {
00:05:30.060 "action_on_timeout": "none",
00:05:30.060 "timeout_us": 0,
00:05:30.060 "timeout_admin_us": 0,
00:05:30.060 "keep_alive_timeout_ms": 10000,
00:05:30.060 "arbitration_burst": 0,
00:05:30.060 "low_priority_weight": 0,
00:05:30.060 "medium_priority_weight": 0,
00:05:30.060 "high_priority_weight": 0,
00:05:30.060 "nvme_adminq_poll_period_us": 10000,
00:05:30.060 "nvme_ioq_poll_period_us": 0,
00:05:30.060 "io_queue_requests": 0,
00:05:30.060 "delay_cmd_submit": true,
00:05:30.060 "transport_retry_count": 4,
00:05:30.060 "bdev_retry_count": 3,
00:05:30.060 "transport_ack_timeout": 0,
00:05:30.060 "ctrlr_loss_timeout_sec": 0,
00:05:30.060 "reconnect_delay_sec": 0,
00:05:30.060 "fast_io_fail_timeout_sec": 0,
00:05:30.060 "disable_auto_failback": false,
00:05:30.060 "generate_uuids": false,
00:05:30.060 "transport_tos": 0,
00:05:30.060 "nvme_error_stat": false,
00:05:30.060 "rdma_srq_size": 0,
00:05:30.060 "io_path_stat": false,
00:05:30.060 "allow_accel_sequence": false,
00:05:30.060 "rdma_max_cq_size": 0,
00:05:30.060 "rdma_cm_event_timeout_ms": 0,
00:05:30.060 "dhchap_digests": [
00:05:30.060 "sha256",
00:05:30.060 "sha384",
00:05:30.060 "sha512"
00:05:30.060 ],
00:05:30.060 "dhchap_dhgroups": [
00:05:30.060 "null",
00:05:30.060 "ffdhe2048",
00:05:30.060 "ffdhe3072",
00:05:30.060 "ffdhe4096",
00:05:30.060 "ffdhe6144",
00:05:30.060 "ffdhe8192"
00:05:30.060 ]
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "bdev_nvme_set_hotplug",
00:05:30.060 "params": {
00:05:30.060 "period_us": 100000,
00:05:30.060 "enable": false
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "bdev_wait_for_examine"
00:05:30.060 }
00:05:30.060 ]
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "scsi",
00:05:30.060 "config": null
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "scheduler",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "framework_set_scheduler",
00:05:30.060 "params": {
00:05:30.060 "name": "static"
00:05:30.060 }
00:05:30.060 }
00:05:30.060 ]
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "vhost_scsi",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "vhost_blk",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "ublk",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "nbd",
00:05:30.060 "config": []
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "subsystem": "nvmf",
00:05:30.060 "config": [
00:05:30.060 {
00:05:30.060 "method": "nvmf_set_config",
00:05:30.060 "params": {
00:05:30.060 "discovery_filter": "match_any",
00:05:30.060 "admin_cmd_passthru": {
00:05:30.060 "identify_ctrlr": false
00:05:30.060 }
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "nvmf_set_max_subsystems",
00:05:30.060 "params": {
00:05:30.060 "max_subsystems": 1024
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "nvmf_set_crdt",
00:05:30.060 "params": {
00:05:30.060 "crdt1": 0,
00:05:30.060 "crdt2": 0,
00:05:30.060 "crdt3": 0
00:05:30.060 }
00:05:30.060 },
00:05:30.060 {
00:05:30.060 "method": "nvmf_create_transport",
00:05:30.060 "params": {
00:05:30.060 "trtype": "TCP",
00:05:30.060 "max_queue_depth": 128,
00:05:30.060 "max_io_qpairs_per_ctrlr": 127,
00:05:30.060 "in_capsule_data_size": 4096,
00:05:30.060 "max_io_size": 131072,
00:05:30.060 "io_unit_size": 131072,
00:05:30.060 "max_aq_depth": 128,
00:05:30.060 "num_shared_buffers": 511,
00:05:30.060 "buf_cache_size": 4294967295,
00:05:30.060 "dif_insert_or_strip": false,
00:05:30.060 "zcopy": false,
00:05:30.060 "c2h_success": true,
00:05:30.060 "sock_priority": 0,
00:05:30.060 "abort_timeout_sec": 1,
00:05:30.061 "ack_timeout": 0,
00:05:30.061 "data_wr_pool_size": 0
00:05:30.061 }
00:05:30.061 }
00:05:30.061 ]
00:05:30.061 },
00:05:30.061 {
00:05:30.061 "subsystem": "iscsi",
00:05:30.061 "config": [
00:05:30.061 {
00:05:30.061 "method": "iscsi_set_options",
00:05:30.061 "params": {
00:05:30.061 "node_base": "iqn.2016-06.io.spdk",
00:05:30.061 "max_sessions": 128,
00:05:30.061 "max_connections_per_session": 2,
00:05:30.061 "max_queue_depth": 64,
00:05:30.061 "default_time2wait": 2,
00:05:30.061 "default_time2retain": 20,
00:05:30.061 "first_burst_length": 8192, 00:05:30.061 "immediate_data": true, 00:05:30.061 "allow_duplicated_isid": false, 00:05:30.061 "error_recovery_level": 0, 00:05:30.061 "nop_timeout": 60, 00:05:30.061 "nop_in_interval": 30, 00:05:30.061 "disable_chap": false, 00:05:30.061 "require_chap": false, 00:05:30.061 "mutual_chap": false, 00:05:30.061 "chap_group": 0, 00:05:30.061 "max_large_datain_per_connection": 64, 00:05:30.061 "max_r2t_per_connection": 4, 00:05:30.061 "pdu_pool_size": 36864, 00:05:30.061 "immediate_data_pool_size": 16384, 00:05:30.061 "data_out_pool_size": 2048 00:05:30.061 } 00:05:30.061 } 00:05:30.061 ] 00:05:30.061 } 00:05:30.061 ] 00:05:30.061 } 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1778457 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1778457 ']' 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1778457 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.061 11:31:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1778457 00:05:30.061 11:31:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.061 11:31:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.061 11:31:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1778457' 00:05:30.061 killing process with pid 1778457 00:05:30.061 11:31:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1778457 00:05:30.061 11:31:58 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1778457 00:05:30.319 11:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1778704 00:05:30.319 11:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:30.319 11:31:58 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1778704 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1778704 ']' 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1778704 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1778704 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1778704' 00:05:35.571 killing process with pid 1778704 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@967 -- # kill 1778704 00:05:35.571 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1778704 00:05:35.828 11:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.828 11:32:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:35.828 00:05:35.828 real 0m6.745s 00:05:35.828 user 0m6.508s 00:05:35.828 sys 0m0.650s 00:05:35.828 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.828 11:32:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:35.828 ************************************ 00:05:35.828 END TEST skip_rpc_with_json 00:05:35.828 ************************************ 00:05:35.828 11:32:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.828 11:32:03 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:35.828 11:32:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.828 11:32:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.829 11:32:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.829 ************************************ 00:05:35.829 START TEST skip_rpc_with_delay 00:05:35.829 ************************************ 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:35.829 [2024-07-15 11:32:03.839929] app.c: 831:spdk_app_start: *ERROR*: 
Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:35.829 [2024-07-15 11:32:03.839995] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:35.829 00:05:35.829 real 0m0.069s 00:05:35.829 user 0m0.043s 00:05:35.829 sys 0m0.026s 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.829 11:32:03 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:35.829 ************************************ 00:05:35.829 END TEST skip_rpc_with_delay 00:05:35.829 ************************************ 00:05:35.829 11:32:03 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.829 11:32:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:35.829 11:32:03 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:35.829 11:32:03 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:35.829 11:32:03 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.829 11:32:03 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.829 11:32:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.829 ************************************ 00:05:35.829 START TEST exit_on_failed_rpc_init 00:05:35.829 ************************************ 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1779803 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1779803 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1779803 ']' 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.829 11:32:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:36.087 [2024-07-15 11:32:03.984291] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
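The skip_rpc_with_delay failure recorded above is deliberate: --wait-for-rpc holds initialization until an RPC arrives, which can never happen once --no-rpc-server disables the RPC listener. A minimal sketch reproducing that check by hand, assuming a built SPDK tree as the working directory:

    # Contradictory flags: spdk_tgt refuses to start and exits non-zero
    # with the app.c "Cannot use '--wait-for-rpc'" error seen above.
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    echo "exit status: $?"   # non-zero is the expected outcome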
00:05:36.087 [2024-07-15 11:32:03.984338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779803 ] 00:05:36.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.087 [2024-07-15 11:32:04.054169] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.087 [2024-07-15 11:32:04.127457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:37.017 11:32:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:37.017 [2024-07-15 11:32:04.813373] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
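The NOT wrapper above is launching a second spdk_tgt against the default /var/tmp/spdk.sock, which the first instance (pid 1779803) still holds; the rpc.c errors that follow are the expected outcome. A minimal sketch of the same collision, assuming the default socket is otherwise free and hugepages are configured:

    # First instance binds the default RPC Unix socket and keeps running.
    ./build/bin/spdk_tgt -m 0x1 &
    sleep 2   # crude stand-in for the harness's waitforlisten polling
    # Second instance must fail: the socket path is already in use.
    ./build/bin/spdk_tgt -m 0x2
    echo "exit status: $?"   # non-zero; the first instance is unaffected
    kill -SIGINT %1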
00:05:37.017 [2024-07-15 11:32:04.813425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1779891 ] 00:05:37.017 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.017 [2024-07-15 11:32:04.881196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.017 [2024-07-15 11:32:04.951003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.017 [2024-07-15 11:32:04.951071] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:37.017 [2024-07-15 11:32:04.951082] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:37.017 [2024-07-15 11:32:04.951090] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1779803 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1779803 ']' 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1779803 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1779803 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1779803' 00:05:37.017 killing process with pid 1779803 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1779803 00:05:37.017 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1779803 00:05:37.274 00:05:37.274 real 0m1.435s 00:05:37.274 user 0m1.608s 00:05:37.274 sys 0m0.430s 00:05:37.274 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.274 11:32:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:37.274 ************************************ 00:05:37.274 END TEST exit_on_failed_rpc_init 00:05:37.274 ************************************ 00:05:37.531 11:32:05 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:37.531 11:32:05 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:37.531 00:05:37.531 real 0m14.038s 00:05:37.531 user 0m13.432s 00:05:37.531 sys 0m1.693s 00:05:37.531 11:32:05 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.531 11:32:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.531 ************************************ 00:05:37.531 END TEST skip_rpc 00:05:37.531 ************************************ 00:05:37.531 11:32:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.531 11:32:05 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:37.531 11:32:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.531 11:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.531 11:32:05 -- common/autotest_common.sh@10 -- # set +x 00:05:37.531 ************************************ 00:05:37.531 START TEST rpc_client 00:05:37.531 ************************************ 00:05:37.531 11:32:05 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:37.531 * Looking for test storage... 00:05:37.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:37.531 11:32:05 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:37.531 OK 00:05:37.531 11:32:05 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:37.531 00:05:37.531 real 0m0.130s 00:05:37.531 user 0m0.062s 00:05:37.531 sys 0m0.078s 00:05:37.531 11:32:05 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.531 11:32:05 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:37.531 ************************************ 00:05:37.531 END TEST rpc_client 00:05:37.531 ************************************ 00:05:37.789 11:32:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:37.789 11:32:05 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:37.789 11:32:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:37.789 11:32:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.789 11:32:05 -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 ************************************ 00:05:37.789 START TEST json_config 00:05:37.789 ************************************ 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.789 
11:32:05 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.789 11:32:05 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.789 11:32:05 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.789 11:32:05 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.789 11:32:05 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.789 11:32:05 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.789 11:32:05 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.789 11:32:05 json_config -- paths/export.sh@5 -- # export PATH 00:05:37.789 11:32:05 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@47 -- # : 0 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.789 11:32:05 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:37.789 11:32:05 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:37.789 INFO: JSON configuration test init 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 11:32:05 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:37.789 11:32:05 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.789 11:32:05 json_config -- json_config/common.sh@10 -- # shift 00:05:37.789 11:32:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.789 11:32:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.789 11:32:05 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.789 11:32:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.789 11:32:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.789 11:32:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1780188 00:05:37.789 11:32:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.789 Waiting for target to run... 00:05:37.789 11:32:05 json_config -- json_config/common.sh@25 -- # waitforlisten 1780188 /var/tmp/spdk_tgt.sock 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@829 -- # '[' -z 1780188 ']' 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.789 11:32:05 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.789 11:32:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.789 [2024-07-15 11:32:05.875638] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:05:37.789 [2024-07-15 11:32:05.875687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780188 ] 00:05:38.045 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.301 [2024-07-15 11:32:06.305002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.301 [2024-07-15 11:32:06.389118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.558 11:32:06 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.558 11:32:06 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:38.558 11:32:06 json_config -- json_config/common.sh@26 -- # echo '' 00:05:38.558 00:05:38.558 11:32:06 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:38.558 11:32:06 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:38.558 11:32:06 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:38.558 11:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.558 11:32:06 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:38.558 11:32:06 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:38.558 11:32:06 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:38.815 11:32:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.815 11:32:06 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:38.815 11:32:06 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:38.815 11:32:06 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:42.121 11:32:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:42.121 11:32:09 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:42.121 11:32:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.121 11:32:10 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:42.121 11:32:10 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:42.121 11:32:10 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:42.121 11:32:10 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.121 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:42.121 MallocForNvmf0 00:05:42.122 11:32:10 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.122 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:42.379 MallocForNvmf1 00:05:42.379 11:32:10 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.379 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.636 [2024-07-15 11:32:10.487303] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.636 11:32:10 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.637 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.637 11:32:10 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.637 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.896 11:32:10 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.896 11:32:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.154 11:32:11 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.154 11:32:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.154 [2024-07-15 11:32:11.161459] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.154 11:32:11 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:43.154 11:32:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.154 11:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.154 11:32:11 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:43.154 11:32:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.154 11:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.412 11:32:11 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:43.412 11:32:11 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.412 11:32:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.412 MallocBdevForConfigChangeCheck 00:05:43.412 11:32:11 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:43.412 11:32:11 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:43.412 11:32:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.412 11:32:11 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:43.412 11:32:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.670 11:32:11 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:43.670 INFO: shutting down applications... 00:05:43.670 11:32:11 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:43.670 11:32:11 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:43.670 11:32:11 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:43.670 11:32:11 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:46.203 Calling clear_iscsi_subsystem 00:05:46.203 Calling clear_nvmf_subsystem 00:05:46.203 Calling clear_nbd_subsystem 00:05:46.203 Calling clear_ublk_subsystem 00:05:46.203 Calling clear_vhost_blk_subsystem 00:05:46.203 Calling clear_vhost_scsi_subsystem 00:05:46.203 Calling clear_bdev_subsystem 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:46.203 11:32:13 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:46.203 11:32:14 json_config -- json_config/json_config.sh@345 -- # break 00:05:46.203 11:32:14 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:46.203 11:32:14 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:46.203 11:32:14 json_config -- json_config/common.sh@31 -- # local app=target 00:05:46.203 11:32:14 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.203 11:32:14 json_config -- json_config/common.sh@35 -- # [[ -n 1780188 ]] 00:05:46.203 11:32:14 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1780188 00:05:46.203 11:32:14 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.203 11:32:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.203 11:32:14 json_config -- json_config/common.sh@41 -- # kill -0 1780188 00:05:46.203 11:32:14 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.772 11:32:14 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.772 11:32:14 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.772 11:32:14 json_config -- json_config/common.sh@41 -- # kill -0 1780188 00:05:46.772 11:32:14 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.772 11:32:14 json_config -- json_config/common.sh@43 -- # break 00:05:46.772 11:32:14 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.772 11:32:14 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:46.772 SPDK target shutdown done 00:05:46.772 11:32:14 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:46.772 INFO: relaunching applications... 00:05:46.772 11:32:14 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.772 11:32:14 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.772 11:32:14 json_config -- json_config/common.sh@10 -- # shift 00:05:46.772 11:32:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.772 11:32:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.772 11:32:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.772 11:32:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.772 11:32:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.772 11:32:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1781899 00:05:46.772 11:32:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.772 Waiting for target to run... 00:05:46.772 11:32:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.772 11:32:14 json_config -- json_config/common.sh@25 -- # waitforlisten 1781899 /var/tmp/spdk_tgt.sock 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 1781899 ']' 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.772 11:32:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.772 [2024-07-15 11:32:14.725598] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
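The relaunch above restarts the target from the JSON captured by the earlier save_config call; a minimal sketch of that round trip, with the paths and flags used by the harness (the old target is assumed to be shell job %1):

    # Snapshot the live configuration of the running target...
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # ...stop it, then bring a fresh target up from the snapshot.
    kill -SIGINT %1; wait %1
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
            --json spdk_tgt_config.json &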
00:05:46.772 [2024-07-15 11:32:14.725654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781899 ] 00:05:46.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.341 [2024-07-15 11:32:15.159877] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.341 [2024-07-15 11:32:15.246758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.631 [2024-07-15 11:32:18.279820] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:50.631 [2024-07-15 11:32:18.312196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:50.889 11:32:18 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.889 11:32:18 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:50.889 11:32:18 json_config -- json_config/common.sh@26 -- # echo '' 00:05:50.889 00:05:50.889 11:32:18 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:50.889 11:32:18 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:50.889 INFO: Checking if target configuration is the same... 00:05:50.889 11:32:18 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.889 11:32:18 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:50.889 11:32:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.889 + '[' 2 -ne 2 ']' 00:05:50.889 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:50.889 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:50.889 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:50.889 +++ basename /dev/fd/62 00:05:50.889 ++ mktemp /tmp/62.XXX 00:05:50.889 + tmp_file_1=/tmp/62.793 00:05:50.889 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:50.889 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.889 + tmp_file_2=/tmp/spdk_tgt_config.json.rzp 00:05:50.889 + ret=0 00:05:50.889 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.147 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.147 + diff -u /tmp/62.793 /tmp/spdk_tgt_config.json.rzp 00:05:51.147 + echo 'INFO: JSON config files are the same' 00:05:51.147 INFO: JSON config files are the same 00:05:51.147 + rm /tmp/62.793 /tmp/spdk_tgt_config.json.rzp 00:05:51.147 + exit 0 00:05:51.147 11:32:19 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:51.147 11:32:19 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:51.147 INFO: changing configuration and checking if this can be detected... 
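The 'JSON config files are the same' verdict above comes from diffing two normalized documents: the live config pulled over RPC and the JSON file the target was started from, each passed through config_filter.py -method sort so key order cannot cause a spurious mismatch. A minimal sketch of the same check (the filter is assumed to read stdin and write stdout, which the xtrace cannot show):

    # Canonicalize both configs, then compare; an empty diff means a match.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort \
            < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/live.sorted /tmp/disk.sorted \
            && echo 'INFO: JSON config files are the same'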
00:05:51.147 11:32:19 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.147 11:32:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:51.405 11:32:19 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.405 11:32:19 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:51.405 11:32:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:51.405 + '[' 2 -ne 2 ']' 00:05:51.405 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:51.405 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:51.405 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:51.405 +++ basename /dev/fd/62 00:05:51.405 ++ mktemp /tmp/62.XXX 00:05:51.405 + tmp_file_1=/tmp/62.l8I 00:05:51.405 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:51.405 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:51.405 + tmp_file_2=/tmp/spdk_tgt_config.json.4kC 00:05:51.405 + ret=0 00:05:51.405 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.663 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:51.663 + diff -u /tmp/62.l8I /tmp/spdk_tgt_config.json.4kC 00:05:51.663 + ret=1 00:05:51.663 + echo '=== Start of file: /tmp/62.l8I ===' 00:05:51.663 + cat /tmp/62.l8I 00:05:51.663 + echo '=== End of file: /tmp/62.l8I ===' 00:05:51.663 + echo '' 00:05:51.663 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4kC ===' 00:05:51.663 + cat /tmp/spdk_tgt_config.json.4kC 00:05:51.663 + echo '=== End of file: /tmp/spdk_tgt_config.json.4kC ===' 00:05:51.663 + echo '' 00:05:51.663 + rm /tmp/62.l8I /tmp/spdk_tgt_config.json.4kC 00:05:51.663 + exit 1 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:51.663 INFO: configuration change detected. 
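Change detection is the inverse check: deleting the scratch bdev MallocBdevForConfigChangeCheck over RPC makes the live config diverge from the file on disk, so the same normalized diff now exits 1. A minimal sketch:

    # Mutate the running target: drop the canary bdev created at init time.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete \
            MallocBdevForConfigChangeCheck
    # Re-run the normalized diff; a non-empty diff proves the change is visible.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
            | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort \
            < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/live.sorted /tmp/disk.sorted \
            || echo 'INFO: configuration change detected.'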
00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@317 -- # [[ -n 1781899 ]] 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:51.663 11:32:19 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.663 11:32:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.922 11:32:19 json_config -- json_config/json_config.sh@323 -- # killprocess 1781899 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@948 -- # '[' -z 1781899 ']' 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@952 -- # kill -0 1781899 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@953 -- # uname 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1781899 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1781899' 00:05:51.922 killing process with pid 1781899 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@967 -- # kill 1781899 00:05:51.922 11:32:19 json_config -- common/autotest_common.sh@972 -- # wait 1781899 00:05:53.830 11:32:21 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:53.830 11:32:21 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:53.830 11:32:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:53.830 11:32:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.830 11:32:21 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:53.830 11:32:21 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:53.830 INFO: Success 00:05:53.830 00:05:53.830 real 0m16.204s 
00:05:53.830 user 0m16.475s 00:05:53.830 sys 0m2.338s 00:05:53.830 11:32:21 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.830 11:32:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.830 ************************************ 00:05:53.830 END TEST json_config 00:05:53.830 ************************************ 00:05:54.090 11:32:21 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.090 11:32:21 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:54.090 11:32:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.090 11:32:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.090 11:32:21 -- common/autotest_common.sh@10 -- # set +x 00:05:54.090 ************************************ 00:05:54.090 START TEST json_config_extra_key 00:05:54.090 ************************************ 00:05:54.090 11:32:21 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:54.090 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:54.090 11:32:22 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.090 11:32:22 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.090 11:32:22 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.090 11:32:22 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.090 11:32:22 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.090 11:32:22 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.090 11:32:22 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:54.090 11:32:22 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.090 11:32:22 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.091 11:32:22 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.091 11:32:22 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:54.091 11:32:22 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:54.091 INFO: launching applications... 00:05:54.091 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1783332 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.091 Waiting for target to run... 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1783332 /var/tmp/spdk_tgt.sock 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1783332 ']' 00:05:54.091 11:32:22 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.091 11:32:22 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:54.091 [2024-07-15 11:32:22.151819] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
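json_config_extra_key boots a target directly from test/json_config/extra_key.json and only verifies that it comes up and shuts down cleanly. A minimal sketch of that launch-and-wait pattern; polling rpc_get_methods is assumed to match what waitforlisten does internally:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
            --json test/json_config/extra_key.json &
    # Poll the RPC socket until the target answers, then shut it down.
    until scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods \
            > /dev/null 2>&1; do sleep 0.5; done
    kill -SIGINT %1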
00:05:54.091 [2024-07-15 11:32:22.151879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783332 ] 00:05:54.091 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.661 [2024-07-15 11:32:22.592540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.661 [2024-07-15 11:32:22.681389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.920 11:32:22 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.920 11:32:22 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:54.920 00:05:54.920 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:54.920 INFO: shutting down applications... 00:05:54.920 11:32:22 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1783332 ]] 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1783332 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1783332 00:05:54.920 11:32:22 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1783332 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:55.489 11:32:23 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:55.489 SPDK target shutdown done 00:05:55.489 11:32:23 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:55.489 Success 00:05:55.489 00:05:55.489 real 0m1.460s 00:05:55.489 user 0m1.042s 00:05:55.489 sys 0m0.560s 00:05:55.489 11:32:23 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.489 11:32:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:55.489 ************************************ 00:05:55.489 END TEST json_config_extra_key 00:05:55.489 ************************************ 00:05:55.489 11:32:23 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.489 11:32:23 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.489 11:32:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.489 11:32:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.489 11:32:23 -- 
common/autotest_common.sh@10 -- # set +x 00:05:55.489 ************************************ 00:05:55.489 START TEST alias_rpc 00:05:55.489 ************************************ 00:05:55.489 11:32:23 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:55.748 * Looking for test storage... 00:05:55.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:55.748 11:32:23 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:55.748 11:32:23 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1783641 00:05:55.748 11:32:23 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1783641 00:05:55.748 11:32:23 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1783641 ']' 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.748 11:32:23 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.748 [2024-07-15 11:32:23.693572] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:05:55.748 [2024-07-15 11:32:23.693624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783641 ] 00:05:55.748 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.748 [2024-07-15 11:32:23.763313] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.748 [2024-07-15 11:32:23.836985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.687 11:32:24 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:56.687 11:32:24 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1783641 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1783641 ']' 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1783641 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1783641 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1783641' 00:05:56.687 killing process with pid 1783641 00:05:56.687 11:32:24 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1783641 00:05:56.687 11:32:24 alias_rpc -- common/autotest_common.sh@972 -- # wait 1783641 00:05:56.946 00:05:56.946 real 0m1.502s 00:05:56.946 user 0m1.601s 00:05:56.946 sys 0m0.448s 00:05:56.946 11:32:25 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.946 11:32:25 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.946 ************************************ 00:05:56.946 END TEST alias_rpc 00:05:56.946 ************************************ 00:05:57.206 11:32:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.206 11:32:25 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:57.206 11:32:25 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.206 11:32:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.206 11:32:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.206 11:32:25 -- common/autotest_common.sh@10 -- # set +x 00:05:57.206 ************************************ 00:05:57.206 START TEST spdkcli_tcp 00:05:57.206 ************************************ 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:57.206 * Looking for test storage... 00:05:57.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1783965 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1783965 00:05:57.206 11:32:25 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1783965 ']' 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.206 11:32:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.206 [2024-07-15 11:32:25.265568] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
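[Editor's note — sketch, not captured output] The TCP bridge exercised in the spdkcli_tcp trace below, condensed: socat forwards 127.0.0.1:9998 to the target's UNIX-domain RPC socket so rpc.py can drive it over TCP. The commands mirror the traced tcp.sh steps:
# bridge TCP port 9998 to the SPDK RPC socket
socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!
# 100 connect retries, 2s timeout, same flags as the traced rpc.py call
scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
kill $socat_pid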
00:05:57.206 [2024-07-15 11:32:25.265620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783965 ] 00:05:57.206 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.466 [2024-07-15 11:32:25.334396] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.466 [2024-07-15 11:32:25.403721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.466 [2024-07-15 11:32:25.403724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.034 11:32:26 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.034 11:32:26 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:58.034 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.034 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1783988 00:05:58.034 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:58.294 [ 00:05:58.294 "bdev_malloc_delete", 00:05:58.294 "bdev_malloc_create", 00:05:58.294 "bdev_null_resize", 00:05:58.294 "bdev_null_delete", 00:05:58.294 "bdev_null_create", 00:05:58.294 "bdev_nvme_cuse_unregister", 00:05:58.294 "bdev_nvme_cuse_register", 00:05:58.294 "bdev_opal_new_user", 00:05:58.294 "bdev_opal_set_lock_state", 00:05:58.294 "bdev_opal_delete", 00:05:58.294 "bdev_opal_get_info", 00:05:58.294 "bdev_opal_create", 00:05:58.294 "bdev_nvme_opal_revert", 00:05:58.294 "bdev_nvme_opal_init", 00:05:58.294 "bdev_nvme_send_cmd", 00:05:58.294 "bdev_nvme_get_path_iostat", 00:05:58.294 "bdev_nvme_get_mdns_discovery_info", 00:05:58.294 "bdev_nvme_stop_mdns_discovery", 00:05:58.294 "bdev_nvme_start_mdns_discovery", 00:05:58.294 "bdev_nvme_set_multipath_policy", 00:05:58.294 "bdev_nvme_set_preferred_path", 00:05:58.294 "bdev_nvme_get_io_paths", 00:05:58.294 "bdev_nvme_remove_error_injection", 00:05:58.294 "bdev_nvme_add_error_injection", 00:05:58.294 "bdev_nvme_get_discovery_info", 00:05:58.294 "bdev_nvme_stop_discovery", 00:05:58.294 "bdev_nvme_start_discovery", 00:05:58.294 "bdev_nvme_get_controller_health_info", 00:05:58.294 "bdev_nvme_disable_controller", 00:05:58.294 "bdev_nvme_enable_controller", 00:05:58.294 "bdev_nvme_reset_controller", 00:05:58.294 "bdev_nvme_get_transport_statistics", 00:05:58.294 "bdev_nvme_apply_firmware", 00:05:58.294 "bdev_nvme_detach_controller", 00:05:58.294 "bdev_nvme_get_controllers", 00:05:58.294 "bdev_nvme_attach_controller", 00:05:58.294 "bdev_nvme_set_hotplug", 00:05:58.294 "bdev_nvme_set_options", 00:05:58.294 "bdev_passthru_delete", 00:05:58.294 "bdev_passthru_create", 00:05:58.294 "bdev_lvol_set_parent_bdev", 00:05:58.294 "bdev_lvol_set_parent", 00:05:58.294 "bdev_lvol_check_shallow_copy", 00:05:58.294 "bdev_lvol_start_shallow_copy", 00:05:58.294 "bdev_lvol_grow_lvstore", 00:05:58.294 "bdev_lvol_get_lvols", 00:05:58.294 "bdev_lvol_get_lvstores", 00:05:58.294 "bdev_lvol_delete", 00:05:58.294 "bdev_lvol_set_read_only", 00:05:58.294 "bdev_lvol_resize", 00:05:58.294 "bdev_lvol_decouple_parent", 00:05:58.294 "bdev_lvol_inflate", 00:05:58.294 "bdev_lvol_rename", 00:05:58.294 "bdev_lvol_clone_bdev", 00:05:58.294 "bdev_lvol_clone", 00:05:58.294 "bdev_lvol_snapshot", 00:05:58.294 "bdev_lvol_create", 00:05:58.294 "bdev_lvol_delete_lvstore", 00:05:58.294 
"bdev_lvol_rename_lvstore", 00:05:58.294 "bdev_lvol_create_lvstore", 00:05:58.294 "bdev_raid_set_options", 00:05:58.294 "bdev_raid_remove_base_bdev", 00:05:58.294 "bdev_raid_add_base_bdev", 00:05:58.294 "bdev_raid_delete", 00:05:58.294 "bdev_raid_create", 00:05:58.294 "bdev_raid_get_bdevs", 00:05:58.294 "bdev_error_inject_error", 00:05:58.294 "bdev_error_delete", 00:05:58.294 "bdev_error_create", 00:05:58.294 "bdev_split_delete", 00:05:58.294 "bdev_split_create", 00:05:58.294 "bdev_delay_delete", 00:05:58.294 "bdev_delay_create", 00:05:58.294 "bdev_delay_update_latency", 00:05:58.294 "bdev_zone_block_delete", 00:05:58.294 "bdev_zone_block_create", 00:05:58.294 "blobfs_create", 00:05:58.294 "blobfs_detect", 00:05:58.294 "blobfs_set_cache_size", 00:05:58.294 "bdev_aio_delete", 00:05:58.294 "bdev_aio_rescan", 00:05:58.294 "bdev_aio_create", 00:05:58.295 "bdev_ftl_set_property", 00:05:58.295 "bdev_ftl_get_properties", 00:05:58.295 "bdev_ftl_get_stats", 00:05:58.295 "bdev_ftl_unmap", 00:05:58.295 "bdev_ftl_unload", 00:05:58.295 "bdev_ftl_delete", 00:05:58.295 "bdev_ftl_load", 00:05:58.295 "bdev_ftl_create", 00:05:58.295 "bdev_virtio_attach_controller", 00:05:58.295 "bdev_virtio_scsi_get_devices", 00:05:58.295 "bdev_virtio_detach_controller", 00:05:58.295 "bdev_virtio_blk_set_hotplug", 00:05:58.295 "bdev_iscsi_delete", 00:05:58.295 "bdev_iscsi_create", 00:05:58.295 "bdev_iscsi_set_options", 00:05:58.295 "accel_error_inject_error", 00:05:58.295 "ioat_scan_accel_module", 00:05:58.295 "dsa_scan_accel_module", 00:05:58.295 "iaa_scan_accel_module", 00:05:58.295 "vfu_virtio_create_scsi_endpoint", 00:05:58.295 "vfu_virtio_scsi_remove_target", 00:05:58.295 "vfu_virtio_scsi_add_target", 00:05:58.295 "vfu_virtio_create_blk_endpoint", 00:05:58.295 "vfu_virtio_delete_endpoint", 00:05:58.295 "keyring_file_remove_key", 00:05:58.295 "keyring_file_add_key", 00:05:58.295 "keyring_linux_set_options", 00:05:58.295 "iscsi_get_histogram", 00:05:58.295 "iscsi_enable_histogram", 00:05:58.295 "iscsi_set_options", 00:05:58.295 "iscsi_get_auth_groups", 00:05:58.295 "iscsi_auth_group_remove_secret", 00:05:58.295 "iscsi_auth_group_add_secret", 00:05:58.295 "iscsi_delete_auth_group", 00:05:58.295 "iscsi_create_auth_group", 00:05:58.295 "iscsi_set_discovery_auth", 00:05:58.295 "iscsi_get_options", 00:05:58.295 "iscsi_target_node_request_logout", 00:05:58.295 "iscsi_target_node_set_redirect", 00:05:58.295 "iscsi_target_node_set_auth", 00:05:58.295 "iscsi_target_node_add_lun", 00:05:58.295 "iscsi_get_stats", 00:05:58.295 "iscsi_get_connections", 00:05:58.295 "iscsi_portal_group_set_auth", 00:05:58.295 "iscsi_start_portal_group", 00:05:58.295 "iscsi_delete_portal_group", 00:05:58.295 "iscsi_create_portal_group", 00:05:58.295 "iscsi_get_portal_groups", 00:05:58.295 "iscsi_delete_target_node", 00:05:58.295 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.295 "iscsi_target_node_add_pg_ig_maps", 00:05:58.295 "iscsi_create_target_node", 00:05:58.295 "iscsi_get_target_nodes", 00:05:58.295 "iscsi_delete_initiator_group", 00:05:58.295 "iscsi_initiator_group_remove_initiators", 00:05:58.295 "iscsi_initiator_group_add_initiators", 00:05:58.295 "iscsi_create_initiator_group", 00:05:58.295 "iscsi_get_initiator_groups", 00:05:58.295 "nvmf_set_crdt", 00:05:58.295 "nvmf_set_config", 00:05:58.295 "nvmf_set_max_subsystems", 00:05:58.295 "nvmf_stop_mdns_prr", 00:05:58.295 "nvmf_publish_mdns_prr", 00:05:58.295 "nvmf_subsystem_get_listeners", 00:05:58.295 "nvmf_subsystem_get_qpairs", 00:05:58.295 "nvmf_subsystem_get_controllers", 00:05:58.295 
"nvmf_get_stats", 00:05:58.295 "nvmf_get_transports", 00:05:58.295 "nvmf_create_transport", 00:05:58.295 "nvmf_get_targets", 00:05:58.295 "nvmf_delete_target", 00:05:58.295 "nvmf_create_target", 00:05:58.295 "nvmf_subsystem_allow_any_host", 00:05:58.295 "nvmf_subsystem_remove_host", 00:05:58.295 "nvmf_subsystem_add_host", 00:05:58.295 "nvmf_ns_remove_host", 00:05:58.295 "nvmf_ns_add_host", 00:05:58.295 "nvmf_subsystem_remove_ns", 00:05:58.295 "nvmf_subsystem_add_ns", 00:05:58.295 "nvmf_subsystem_listener_set_ana_state", 00:05:58.295 "nvmf_discovery_get_referrals", 00:05:58.295 "nvmf_discovery_remove_referral", 00:05:58.295 "nvmf_discovery_add_referral", 00:05:58.295 "nvmf_subsystem_remove_listener", 00:05:58.295 "nvmf_subsystem_add_listener", 00:05:58.295 "nvmf_delete_subsystem", 00:05:58.295 "nvmf_create_subsystem", 00:05:58.295 "nvmf_get_subsystems", 00:05:58.295 "env_dpdk_get_mem_stats", 00:05:58.295 "nbd_get_disks", 00:05:58.295 "nbd_stop_disk", 00:05:58.295 "nbd_start_disk", 00:05:58.295 "ublk_recover_disk", 00:05:58.295 "ublk_get_disks", 00:05:58.295 "ublk_stop_disk", 00:05:58.295 "ublk_start_disk", 00:05:58.295 "ublk_destroy_target", 00:05:58.295 "ublk_create_target", 00:05:58.295 "virtio_blk_create_transport", 00:05:58.295 "virtio_blk_get_transports", 00:05:58.295 "vhost_controller_set_coalescing", 00:05:58.295 "vhost_get_controllers", 00:05:58.295 "vhost_delete_controller", 00:05:58.295 "vhost_create_blk_controller", 00:05:58.295 "vhost_scsi_controller_remove_target", 00:05:58.295 "vhost_scsi_controller_add_target", 00:05:58.295 "vhost_start_scsi_controller", 00:05:58.295 "vhost_create_scsi_controller", 00:05:58.295 "thread_set_cpumask", 00:05:58.295 "framework_get_governor", 00:05:58.295 "framework_get_scheduler", 00:05:58.295 "framework_set_scheduler", 00:05:58.295 "framework_get_reactors", 00:05:58.295 "thread_get_io_channels", 00:05:58.295 "thread_get_pollers", 00:05:58.295 "thread_get_stats", 00:05:58.295 "framework_monitor_context_switch", 00:05:58.295 "spdk_kill_instance", 00:05:58.295 "log_enable_timestamps", 00:05:58.295 "log_get_flags", 00:05:58.295 "log_clear_flag", 00:05:58.295 "log_set_flag", 00:05:58.295 "log_get_level", 00:05:58.295 "log_set_level", 00:05:58.295 "log_get_print_level", 00:05:58.295 "log_set_print_level", 00:05:58.295 "framework_enable_cpumask_locks", 00:05:58.295 "framework_disable_cpumask_locks", 00:05:58.295 "framework_wait_init", 00:05:58.295 "framework_start_init", 00:05:58.295 "scsi_get_devices", 00:05:58.295 "bdev_get_histogram", 00:05:58.295 "bdev_enable_histogram", 00:05:58.295 "bdev_set_qos_limit", 00:05:58.295 "bdev_set_qd_sampling_period", 00:05:58.295 "bdev_get_bdevs", 00:05:58.295 "bdev_reset_iostat", 00:05:58.295 "bdev_get_iostat", 00:05:58.295 "bdev_examine", 00:05:58.295 "bdev_wait_for_examine", 00:05:58.295 "bdev_set_options", 00:05:58.295 "notify_get_notifications", 00:05:58.295 "notify_get_types", 00:05:58.295 "accel_get_stats", 00:05:58.295 "accel_set_options", 00:05:58.295 "accel_set_driver", 00:05:58.295 "accel_crypto_key_destroy", 00:05:58.295 "accel_crypto_keys_get", 00:05:58.295 "accel_crypto_key_create", 00:05:58.295 "accel_assign_opc", 00:05:58.295 "accel_get_module_info", 00:05:58.295 "accel_get_opc_assignments", 00:05:58.295 "vmd_rescan", 00:05:58.295 "vmd_remove_device", 00:05:58.295 "vmd_enable", 00:05:58.295 "sock_get_default_impl", 00:05:58.295 "sock_set_default_impl", 00:05:58.295 "sock_impl_set_options", 00:05:58.295 "sock_impl_get_options", 00:05:58.295 "iobuf_get_stats", 00:05:58.295 "iobuf_set_options", 
00:05:58.295 "keyring_get_keys", 00:05:58.295 "framework_get_pci_devices", 00:05:58.295 "framework_get_config", 00:05:58.295 "framework_get_subsystems", 00:05:58.295 "vfu_tgt_set_base_path", 00:05:58.295 "trace_get_info", 00:05:58.295 "trace_get_tpoint_group_mask", 00:05:58.295 "trace_disable_tpoint_group", 00:05:58.295 "trace_enable_tpoint_group", 00:05:58.295 "trace_clear_tpoint_mask", 00:05:58.295 "trace_set_tpoint_mask", 00:05:58.295 "spdk_get_version", 00:05:58.295 "rpc_get_methods" 00:05:58.295 ] 00:05:58.295 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.295 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.295 11:32:26 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1783965 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1783965 ']' 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1783965 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1783965 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1783965' 00:05:58.295 killing process with pid 1783965 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1783965 00:05:58.295 11:32:26 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1783965 00:05:58.555 00:05:58.555 real 0m1.534s 00:05:58.555 user 0m2.808s 00:05:58.555 sys 0m0.482s 00:05:58.555 11:32:26 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.555 11:32:26 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.555 ************************************ 00:05:58.555 END TEST spdkcli_tcp 00:05:58.555 ************************************ 00:05:58.815 11:32:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.815 11:32:26 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.815 11:32:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.815 11:32:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.815 11:32:26 -- common/autotest_common.sh@10 -- # set +x 00:05:58.815 ************************************ 00:05:58.815 START TEST dpdk_mem_utility 00:05:58.815 ************************************ 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:58.815 * Looking for test storage... 
00:05:58.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:58.815 11:32:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:58.815 11:32:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1784293 00:05:58.815 11:32:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1784293 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1784293 ']' 00:05:58.815 11:32:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.815 11:32:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:58.815 [2024-07-15 11:32:26.873226] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:05:58.815 [2024-07-15 11:32:26.873277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784293 ] 00:05:58.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.075 [2024-07-15 11:32:26.943240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.075 [2024-07-15 11:32:27.018150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.653 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.653 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:59.653 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:59.653 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:59.653 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.653 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:59.653 { 00:05:59.653 "filename": "/tmp/spdk_mem_dump.txt" 00:05:59.653 } 00:05:59.653 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.653 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:59.653 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:59.653 1 heaps totaling size 814.000000 MiB 00:05:59.653 size: 814.000000 MiB heap id: 0 00:05:59.653 end heaps---------- 00:05:59.653 8 mempools totaling size 598.116089 MiB 00:05:59.653 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:59.653 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:59.653 size: 84.521057 MiB name: bdev_io_1784293 00:05:59.653 size: 51.011292 MiB name: evtpool_1784293 00:05:59.653 
size: 50.003479 MiB name: msgpool_1784293 00:05:59.653 size: 21.763794 MiB name: PDU_Pool 00:05:59.653 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:59.653 size: 0.026123 MiB name: Session_Pool 00:05:59.653 end mempools------- 00:05:59.653 6 memzones totaling size 4.142822 MiB 00:05:59.654 size: 1.000366 MiB name: RG_ring_0_1784293 00:05:59.654 size: 1.000366 MiB name: RG_ring_1_1784293 00:05:59.654 size: 1.000366 MiB name: RG_ring_4_1784293 00:05:59.654 size: 1.000366 MiB name: RG_ring_5_1784293 00:05:59.654 size: 0.125366 MiB name: RG_ring_2_1784293 00:05:59.654 size: 0.015991 MiB name: RG_ring_3_1784293 00:05:59.654 end memzones------- 00:05:59.654 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:59.913 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:59.913 list of free elements. size: 12.519348 MiB 00:05:59.913 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:59.913 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:59.913 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:59.913 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:59.913 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:59.913 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:59.913 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:59.913 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:59.913 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:59.913 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:59.913 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:59.913 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:59.913 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:59.913 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:59.913 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:59.913 list of standard malloc elements. 
size: 199.218079 MiB 00:05:59.913 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:59.913 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:59.913 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:59.913 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:59.913 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:59.913 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:59.913 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:59.913 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:59.913 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:59.913 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:59.913 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:59.913 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:59.913 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:59.913 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:59.914 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:59.914 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:59.914 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:59.914 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:59.914 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:59.914 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:59.914 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:59.914 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:59.914 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:59.914 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:59.914 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:59.914 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:59.914 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:59.914 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:59.914 list of memzone associated elements. 
size: 602.262573 MiB 00:05:59.914 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:59.914 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:59.914 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:59.914 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:59.914 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:59.914 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1784293_0 00:05:59.914 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:59.914 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1784293_0 00:05:59.914 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:59.914 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1784293_0 00:05:59.914 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:59.914 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:59.914 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:59.914 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:59.914 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:59.914 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1784293 00:05:59.914 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:59.914 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1784293 00:05:59.914 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:59.914 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1784293 00:05:59.914 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:59.914 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:59.914 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:59.914 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:59.914 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:59.914 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:59.914 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:59.914 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:59.914 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:59.914 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1784293 00:05:59.914 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:59.914 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1784293 00:05:59.914 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:59.914 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1784293 00:05:59.914 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:59.914 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1784293 00:05:59.914 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:59.914 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1784293 00:05:59.914 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:59.914 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:59.914 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:59.914 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:59.914 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:59.914 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:59.914 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:59.914 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1784293 00:05:59.914 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:59.914 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:59.914 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:59.914 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:59.914 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:59.914 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1784293 00:05:59.914 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:59.914 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:59.914 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:59.914 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1784293 00:05:59.914 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:59.914 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1784293 00:05:59.914 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:59.914 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:59.914 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:59.914 11:32:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1784293 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1784293 ']' 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1784293 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1784293 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1784293' 00:05:59.914 killing process with pid 1784293 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1784293 00:05:59.914 11:32:27 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1784293 00:06:00.173 00:06:00.173 real 0m1.399s 00:06:00.173 user 0m1.437s 00:06:00.173 sys 0m0.423s 00:06:00.173 11:32:28 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.173 11:32:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.173 ************************************ 00:06:00.173 END TEST dpdk_mem_utility 00:06:00.173 ************************************ 00:06:00.173 11:32:28 -- common/autotest_common.sh@1142 -- # return 0 00:06:00.173 11:32:28 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.173 11:32:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.173 11:32:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.173 11:32:28 -- common/autotest_common.sh@10 -- # set +x 00:06:00.173 ************************************ 00:06:00.173 START TEST event 00:06:00.173 ************************************ 00:06:00.173 11:32:28 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:00.433 * Looking for test storage... 
00:06:00.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:00.433 11:32:28 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:00.433 11:32:28 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:00.433 11:32:28 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.433 11:32:28 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:00.433 11:32:28 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.433 11:32:28 event -- common/autotest_common.sh@10 -- # set +x 00:06:00.433 ************************************ 00:06:00.433 START TEST event_perf 00:06:00.433 ************************************ 00:06:00.433 11:32:28 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:00.433 Running I/O for 1 seconds...[2024-07-15 11:32:28.376736] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:00.433 [2024-07-15 11:32:28.376826] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784624 ] 00:06:00.433 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.433 [2024-07-15 11:32:28.448050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.433 [2024-07-15 11:32:28.519575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.433 [2024-07-15 11:32:28.519672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.433 [2024-07-15 11:32:28.519756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.433 [2024-07-15 11:32:28.519758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.826 Running I/O for 1 seconds... 00:06:01.826 lcore 0: 221851 00:06:01.826 lcore 1: 221851 00:06:01.826 lcore 2: 221853 00:06:01.826 lcore 3: 221852 00:06:01.826 done. 00:06:01.826 00:06:01.826 real 0m1.231s 00:06:01.826 user 0m4.134s 00:06:01.826 sys 0m0.095s 00:06:01.826 11:32:29 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.826 11:32:29 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.826 ************************************ 00:06:01.826 END TEST event_perf 00:06:01.826 ************************************ 00:06:01.826 11:32:29 event -- common/autotest_common.sh@1142 -- # return 0 00:06:01.826 11:32:29 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.826 11:32:29 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:01.826 11:32:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.826 11:32:29 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.826 ************************************ 00:06:01.826 START TEST event_reactor 00:06:01.826 ************************************ 00:06:01.826 11:32:29 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:01.826 [2024-07-15 11:32:29.689557] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
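[Editor's note — sketch, not captured output] The event_perf run summarized above in stand-alone form; the binary path is relative to the spdk tree, and the reading of the counters is an interpretation of this run:
# one second of event processing across the four reactors in mask 0xF
test/event/event_perf/event_perf -m 0xF -t 1
# prints one "lcore N: <count>" line per reactor after the window closes;
# here all four lcores landed within a few events of 221851, i.e. balanced.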
00:06:01.826 [2024-07-15 11:32:29.689637] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784905 ] 00:06:01.826 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.826 [2024-07-15 11:32:29.761186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.826 [2024-07-15 11:32:29.829509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.218 test_start 00:06:03.218 oneshot 00:06:03.218 tick 100 00:06:03.218 tick 100 00:06:03.218 tick 250 00:06:03.218 tick 100 00:06:03.218 tick 100 00:06:03.218 tick 100 00:06:03.218 tick 250 00:06:03.218 tick 500 00:06:03.218 tick 100 00:06:03.218 tick 100 00:06:03.218 tick 250 00:06:03.218 tick 100 00:06:03.218 tick 100 00:06:03.218 test_end 00:06:03.218 00:06:03.218 real 0m1.232s 00:06:03.218 user 0m1.135s 00:06:03.218 sys 0m0.094s 00:06:03.218 11:32:30 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.218 11:32:30 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:03.218 ************************************ 00:06:03.218 END TEST event_reactor 00:06:03.218 ************************************ 00:06:03.218 11:32:30 event -- common/autotest_common.sh@1142 -- # return 0 00:06:03.218 11:32:30 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.218 11:32:30 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:03.218 11:32:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.218 11:32:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.218 ************************************ 00:06:03.218 START TEST event_reactor_perf 00:06:03.218 ************************************ 00:06:03.218 11:32:30 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:03.218 [2024-07-15 11:32:31.003795] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
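[Editor's note — interpretation, not captured output] The test_start/tick/test_end block above is the reactor test's poller trace: each "tick <n>" line marks a registered poller firing, with the label read here as that poller's configured period value, over the one-second run started by:
test/event/reactor/reactor -t 1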
00:06:03.218 [2024-07-15 11:32:31.003884] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785079 ] 00:06:03.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.218 [2024-07-15 11:32:31.077714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.218 [2024-07-15 11:32:31.148993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.156 test_start 00:06:04.156 test_end 00:06:04.156 Performance: 523744 events per second 00:06:04.156 00:06:04.156 real 0m1.233s 00:06:04.156 user 0m1.139s 00:06:04.156 sys 0m0.089s 00:06:04.156 11:32:32 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.156 11:32:32 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:04.156 ************************************ 00:06:04.156 END TEST event_reactor_perf 00:06:04.156 ************************************ 00:06:04.156 11:32:32 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.156 11:32:32 event -- event/event.sh@49 -- # uname -s 00:06:04.415 11:32:32 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:04.415 11:32:32 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.415 11:32:32 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.415 11:32:32 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.415 11:32:32 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.415 ************************************ 00:06:04.415 START TEST event_scheduler 00:06:04.416 ************************************ 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:04.416 * Looking for test storage... 00:06:04.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:04.416 11:32:32 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:04.416 11:32:32 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1785337 00:06:04.416 11:32:32 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:04.416 11:32:32 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.416 11:32:32 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1785337 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1785337 ']' 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
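[Editor's note — sketch, not captured output] The --wait-for-rpc bring-up traced below, condensed: the scheduler test app starts with framework init held, the dynamic scheduler is selected over RPC, and init is then released. Commands match the traced rpc_cmd calls, with paths shortened:
test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
scripts/rpc.py framework_set_scheduler dynamic   # dpdk governor may fail on partial SMT masks, as it does here
scripts/rpc.py framework_start_init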
00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.416 11:32:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.416 [2024-07-15 11:32:32.457676] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:04.416 [2024-07-15 11:32:32.457734] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1785337 ] 00:06:04.416 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.675 [2024-07-15 11:32:32.524629] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.675 [2024-07-15 11:32:32.603105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.675 [2024-07-15 11:32:32.603186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.675 [2024-07-15 11:32:32.603273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:04.675 [2024-07-15 11:32:32.603274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:05.256 11:32:33 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 [2024-07-15 11:32:33.273582] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:05.256 [2024-07-15 11:32:33.273604] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:05.256 [2024-07-15 11:32:33.273614] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:05.256 [2024-07-15 11:32:33.273622] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:05.256 [2024-07-15 11:32:33.273629] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.256 11:32:33 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.256 [2024-07-15 11:32:33.344615] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.256 11:32:33 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.256 11:32:33 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 ************************************ 00:06:05.519 START TEST scheduler_create_thread 00:06:05.519 ************************************ 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 2 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 3 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 4 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 5 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 6 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 7 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 8 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 9 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.519 10 00:06:05.519 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.520 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.087 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.087 11:32:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:06.087 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.087 11:32:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.467 11:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:07.467 11:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:07.467 11:32:35 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:07.467 11:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:07.467 11:32:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.404 11:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:08.404 00:06:08.404 real 0m3.101s 00:06:08.404 user 0m0.023s 00:06:08.404 sys 0m0.008s 00:06:08.404 11:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.404 11:32:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:08.404 ************************************ 00:06:08.404 END TEST scheduler_create_thread 00:06:08.404 ************************************ 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:08.663 11:32:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:08.663 11:32:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1785337 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1785337 ']' 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1785337 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1785337 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1785337' 00:06:08.663 killing process with pid 1785337 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1785337 00:06:08.663 11:32:36 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1785337 00:06:08.923 [2024-07-15 11:32:36.867754] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
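
Stripped of the xtrace noise, the scheduler_create_thread test above reduces to a short RPC sequence. The sketch below is a hedged reconstruction of that sequence, not the test's literal source: it assumes scripts/rpc.py is invoked from the SPDK tree with the test's scheduler_plugin importable, where the real test goes through the rpc_cmd wrapper from autotest_common.sh.

    rpc="scripts/rpc.py --plugin scheduler_plugin"

    # Four busy threads pinned to cores 0-3 (one mask bit each, 100% active)
    # and four matching idle threads (0% active) on the same cores.
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
    done
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc scheduler_thread_create -n idle_pinned -m "$mask" -a 0
    done

    # Unpinned threads: one created at 30% load, one created idle and then
    # raised to 50% via the thread id that the create call prints.
    $rpc scheduler_thread_create -n one_third_active -a 30
    thread_id=$($rpc scheduler_thread_create -n half_active -a 0)
    $rpc scheduler_thread_set_active "$thread_id" 50

    # A thread can also be deleted while the dynamic scheduler is running.
    tmp_id=$($rpc scheduler_thread_create -n deleted -a 100)
    $rpc scheduler_thread_delete "$tmp_id"

The point of the sequence is that every case the dynamic scheduler must rebalance (pinned/unpinned, busy/idle, load changed mid-run, thread deleted mid-run) is exercised before the app is torn down.
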
00:06:09.182 00:06:09.183 real 0m4.775s 00:06:09.183 user 0m9.199s 00:06:09.183 sys 0m0.457s 00:06:09.183 11:32:37 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.183 11:32:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.183 ************************************ 00:06:09.183 END TEST event_scheduler 00:06:09.183 ************************************ 00:06:09.183 11:32:37 event -- common/autotest_common.sh@1142 -- # return 0 00:06:09.183 11:32:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:09.183 11:32:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:09.183 11:32:37 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.183 11:32:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.183 11:32:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:09.183 ************************************ 00:06:09.183 START TEST app_repeat 00:06:09.183 ************************************ 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1786245 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1786245' 00:06:09.183 Process app_repeat pid: 1786245 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:09.183 spdk_app_start Round 0 00:06:09.183 11:32:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1786245 /var/tmp/spdk-nbd.sock 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1786245 ']' 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.183 11:32:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.183 [2024-07-15 11:32:37.203705] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
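
app_repeat is started against a private RPC socket and the harness blocks until that socket answers before driving any rounds. A rough approximation of the launch-and-wait pattern follows; it is a sketch only, since the real waitforlisten in autotest_common.sh carries more bookkeeping (retry budget, pid liveness checks) than shown here.

    sock=/var/tmp/spdk-nbd.sock
    test/event/app_repeat/app_repeat -r "$sock" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # Ready once the UNIX-domain RPC socket answers a trivial request.
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

(killprocess here is the common teardown helper; a sketch of it appears at the end of this section.)
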
00:06:09.183 [2024-07-15 11:32:37.203763] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1786245 ] 00:06:09.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.183 [2024-07-15 11:32:37.273571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.442 [2024-07-15 11:32:37.350735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.442 [2024-07-15 11:32:37.350737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.010 11:32:38 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.010 11:32:38 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.010 11:32:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.269 Malloc0 00:06:10.269 11:32:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.528 Malloc1 00:06:10.528 11:32:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.528 /dev/nbd0 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.528 11:32:38 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.528 1+0 records in 00:06:10.528 1+0 records out 00:06:10.528 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000268111 s, 15.3 MB/s 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.528 11:32:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.528 11:32:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.787 /dev/nbd1 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.787 1+0 records in 00:06:10.787 1+0 records out 00:06:10.787 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218054 s, 18.8 MB/s 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.787 11:32:38 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.787 11:32:38 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.787 11:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.047 11:32:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:11.047 { 00:06:11.047 "nbd_device": "/dev/nbd0", 00:06:11.047 "bdev_name": "Malloc0" 00:06:11.047 }, 00:06:11.047 { 00:06:11.047 "nbd_device": "/dev/nbd1", 00:06:11.047 "bdev_name": "Malloc1" 00:06:11.047 } 00:06:11.047 ]' 00:06:11.047 11:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.047 { 00:06:11.047 "nbd_device": "/dev/nbd0", 00:06:11.047 "bdev_name": "Malloc0" 00:06:11.047 }, 00:06:11.047 { 00:06:11.047 "nbd_device": "/dev/nbd1", 00:06:11.047 "bdev_name": "Malloc1" 00:06:11.047 } 00:06:11.047 ]' 00:06:11.047 11:32:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.047 /dev/nbd1' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.047 /dev/nbd1' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.047 256+0 records in 00:06:11.047 256+0 records out 00:06:11.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011415 s, 91.9 MB/s 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.047 256+0 records in 00:06:11.047 256+0 records out 00:06:11.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198622 s, 52.8 MB/s 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.047 256+0 records in 00:06:11.047 256+0 records out 00:06:11.047 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0154364 s, 67.9 MB/s 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.047 11:32:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.306 11:32:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.566 11:32:39 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.566 11:32:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.826 11:32:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.827 11:32:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.827 11:32:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.827 11:32:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.827 11:32:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.827 11:32:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.086 11:32:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.086 [2024-07-15 11:32:40.122384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.086 [2024-07-15 11:32:40.186228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.086 [2024-07-15 11:32:40.186230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.344 [2024-07-15 11:32:40.227046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.344 [2024-07-15 11:32:40.227088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.879 11:32:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.879 11:32:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:14.879 spdk_app_start Round 1 00:06:14.879 11:32:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1786245 /var/tmp/spdk-nbd.sock 00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1786245 ']' 00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:14.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
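
With Round 0 complete, the per-round device plumbing is easier to read in condensed form. Every round performs the same rpc.py sequence against the app's socket; the sketch below is lifted from the trace with paths shortened, and the "64 4096" arguments are the malloc bdev's size in MB and its block size.

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    $rpc bdev_malloc_create 64 4096          # -> Malloc0
    $rpc bdev_malloc_create 64 4096          # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0    # expose each bdev as an NBD node
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    $rpc nbd_get_disks                       # JSON used to count live mappings

    # ... write/verify pass over /dev/nbd0 and /dev/nbd1 ...

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    $rpc nbd_get_disks                       # now [], i.e. zero mappings left
    $rpc spdk_kill_instance SIGTERM          # app exits, next round restarts it
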
00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:14.879 11:32:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.138 11:32:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.138 11:32:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:15.138 11:32:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.396 Malloc0 00:06:15.396 11:32:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.396 Malloc1 00:06:15.396 11:32:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.396 11:32:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.654 /dev/nbd0 00:06:15.654 11:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.654 11:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:15.654 1+0 records in 00:06:15.654 1+0 records out 00:06:15.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159997 s, 25.6 MB/s 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.654 11:32:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:15.654 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.654 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.654 11:32:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.913 /dev/nbd1 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.913 1+0 records in 00:06:15.913 1+0 records out 00:06:15.913 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269435 s, 15.2 MB/s 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.913 11:32:43 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.913 11:32:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:16.172 { 00:06:16.172 "nbd_device": "/dev/nbd0", 00:06:16.172 "bdev_name": "Malloc0" 00:06:16.172 }, 00:06:16.172 { 00:06:16.172 "nbd_device": "/dev/nbd1", 00:06:16.172 "bdev_name": "Malloc1" 00:06:16.172 } 00:06:16.172 ]' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.172 { 00:06:16.172 "nbd_device": "/dev/nbd0", 00:06:16.172 "bdev_name": "Malloc0" 00:06:16.172 }, 00:06:16.172 { 00:06:16.172 "nbd_device": "/dev/nbd1", 00:06:16.172 "bdev_name": "Malloc1" 00:06:16.172 } 00:06:16.172 ]' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.172 /dev/nbd1' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.172 /dev/nbd1' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.172 256+0 records in 00:06:16.172 256+0 records out 00:06:16.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114242 s, 91.8 MB/s 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.172 256+0 records in 00:06:16.172 256+0 records out 00:06:16.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140485 s, 74.6 MB/s 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.172 256+0 records in 00:06:16.172 256+0 records out 00:06:16.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021258 s, 49.3 MB/s 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.172 11:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.173 11:32:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.431 11:32:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.690 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.950 11:32:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.950 11:32:44 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.950 11:32:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:17.209 [2024-07-15 11:32:45.189443] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.209 [2024-07-15 11:32:45.252599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.209 [2024-07-15 11:32:45.252602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.209 [2024-07-15 11:32:45.294336] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:17.209 [2024-07-15 11:32:45.294378] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:20.499 spdk_app_start Round 2 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1786245 /var/tmp/spdk-nbd.sock 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1786245 ']' 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
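
The write/verify pass that nbd_dd_data_verify runs in each round is plain dd plus cmp. As a stand-alone sketch (temp-file path shortened from the workspace path in the trace):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    # write: 1 MiB of random data (256 x 4 KiB blocks), pushed to every
    # device with O_DIRECT so the data really crosses the NBD boundary.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-for-byte comparison of the first 1 MiB of each device.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"

cmp exits non-zero on the first mismatching byte, which fails the round immediately under the harness's error handling.
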
00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.499 11:32:48 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.499 Malloc0 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:20.499 Malloc1 00:06:20.499 11:32:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.499 11:32:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:20.759 /dev/nbd0 00:06:20.759 11:32:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:20.759 11:32:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:20.759 1+0 records in 00:06:20.759 1+0 records out 00:06:20.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258159 s, 15.9 MB/s 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:20.759 11:32:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:20.759 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:20.759 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:20.759 11:32:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:21.018 /dev/nbd1 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:21.018 1+0 records in 00:06:21.018 1+0 records out 00:06:21.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226621 s, 18.1 MB/s 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:21.018 11:32:48 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.018 11:32:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.277 11:32:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:21.277 { 00:06:21.277 "nbd_device": "/dev/nbd0", 00:06:21.277 "bdev_name": "Malloc0" 00:06:21.277 }, 00:06:21.277 { 00:06:21.277 "nbd_device": "/dev/nbd1", 00:06:21.277 "bdev_name": "Malloc1" 00:06:21.277 } 00:06:21.277 ]' 00:06:21.277 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:21.277 { 00:06:21.277 "nbd_device": "/dev/nbd0", 00:06:21.277 "bdev_name": "Malloc0" 00:06:21.277 }, 00:06:21.277 { 00:06:21.277 "nbd_device": "/dev/nbd1", 00:06:21.277 "bdev_name": "Malloc1" 00:06:21.277 } 00:06:21.278 ]' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:21.278 /dev/nbd1' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:21.278 /dev/nbd1' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:21.278 256+0 records in 00:06:21.278 256+0 records out 00:06:21.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408277 s, 257 MB/s 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:21.278 256+0 records in 00:06:21.278 256+0 records out 00:06:21.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196579 s, 53.3 MB/s 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:21.278 256+0 records in 00:06:21.278 256+0 records out 00:06:21.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149659 s, 70.1 MB/s 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.278 11:32:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:21.537 11:32:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:21.796 11:32:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:21.796 11:32:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:22.054 11:32:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:22.313 [2024-07-15 11:32:50.252026] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.313 [2024-07-15 11:32:50.316268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.313 [2024-07-15 11:32:50.316271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.313 [2024-07-15 11:32:50.356952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:22.313 [2024-07-15 11:32:50.356993] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:25.625 11:32:53 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1786245 /var/tmp/spdk-nbd.sock 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1786245 ']' 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:25.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
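
The repeated grep loops in the trace come from two small polling helpers: one waits for an NBD node to appear and serve a first read, the other waits for it to vanish after nbd_stop_disk. A simplified sketch of both (the real versions live in the common test libraries and also verify the size of the block they read back):

    waitfornbd() {                 # block until /dev/$1 exists and serves I/O
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # one direct read proves the device is actually backed by a bdev
        dd if="/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }

    waitfornbd_exit() {            # block until /dev/$1 is gone again
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
    }

Both loops are bounded at 20 attempts, matching the "(( i <= 20 ))" guards in the trace, so a stuck device ends the wait after a couple of seconds instead of hanging the run.
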
00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:25.625 11:32:53 event.app_repeat -- event/event.sh@39 -- # killprocess 1786245 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1786245 ']' 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1786245 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1786245 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1786245' 00:06:25.625 killing process with pid 1786245 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1786245 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1786245 00:06:25.625 spdk_app_start is called in Round 0. 00:06:25.625 Shutdown signal received, stop current app iteration 00:06:25.625 Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 reinitialization... 00:06:25.625 spdk_app_start is called in Round 1. 00:06:25.625 Shutdown signal received, stop current app iteration 00:06:25.625 Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 reinitialization... 00:06:25.625 spdk_app_start is called in Round 2. 00:06:25.625 Shutdown signal received, stop current app iteration 00:06:25.625 Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 reinitialization... 00:06:25.625 spdk_app_start is called in Round 3. 
00:06:25.625 Shutdown signal received, stop current app iteration 00:06:25.625 11:32:53 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:25.625 11:32:53 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:25.625 00:06:25.625 real 0m16.277s 00:06:25.625 user 0m34.656s 00:06:25.625 sys 0m2.983s 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.625 11:32:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:25.625 ************************************ 00:06:25.625 END TEST app_repeat 00:06:25.625 ************************************ 00:06:25.625 11:32:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:25.625 11:32:53 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:25.625 11:32:53 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.625 11:32:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.625 11:32:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.625 11:32:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:25.625 ************************************ 00:06:25.626 START TEST cpu_locks 00:06:25.626 ************************************ 00:06:25.626 11:32:53 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:25.626 * Looking for test storage... 00:06:25.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:25.626 11:32:53 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:25.626 11:32:53 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:25.626 11:32:53 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:25.626 11:32:53 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:25.626 11:32:53 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.626 11:32:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.626 11:32:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.626 ************************************ 00:06:25.626 START TEST default_locks 00:06:25.626 ************************************ 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1789229 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1789229 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1789229 ']' 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
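The app_repeat teardown above, and every teardown in the cpu_locks tests that follow, goes through the same killprocess shape traced here: confirm the pid is alive with kill -0, look up its command name with ps to make sure it is a reactor process and not a sudo wrapper, then kill and wait. A sketch under those assumptions (the traced runs only ever take the reactor_0 path, so the sudo branch's real behavior is not visible here and is reduced to a guard):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                 # bail out if already gone
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      # every trace in this log sees reactor_0 here, never sudo
      [ "$process_name" = sudo ] && return 1   # assumption: skip privileged wrappers
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                # reap it so the test can assert exit
  }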
00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.626 11:32:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.626 [2024-07-15 11:32:53.679930] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:25.626 [2024-07-15 11:32:53.679984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789229 ] 00:06:25.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.884 [2024-07-15 11:32:53.749004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.884 [2024-07-15 11:32:53.821956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.452 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.452 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:26.452 11:32:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1789229 00:06:26.452 11:32:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1789229 00:06:26.452 11:32:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.710 lslocks: write error 00:06:26.710 11:32:54 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1789229 00:06:26.711 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1789229 ']' 00:06:26.711 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1789229 00:06:26.711 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:26.711 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.711 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1789229 00:06:26.970 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.970 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.970 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1789229' 00:06:26.970 killing process with pid 1789229 00:06:26.970 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1789229 00:06:26.970 11:32:54 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1789229 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1789229 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1789229 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:27.230 11:32:55 
event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 1789229 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1789229 ']' 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1789229) - No such process 00:06:27.230 ERROR: process (pid: 1789229) is no longer running 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.230 00:06:27.230 real 0m1.499s 00:06:27.230 user 0m1.547s 00:06:27.230 sys 0m0.511s 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.230 11:32:55 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.230 ************************************ 00:06:27.230 END TEST default_locks 00:06:27.230 ************************************ 00:06:27.230 11:32:55 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:27.230 11:32:55 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:27.230 11:32:55 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:27.230 11:32:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.230 11:32:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.230 ************************************ 00:06:27.230 START TEST default_locks_via_rpc 00:06:27.230 ************************************ 00:06:27.230 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:27.230 11:32:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1789525 00:06:27.231 11:32:55 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1789525 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1789525 ']' 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.231 11:32:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.231 [2024-07-15 11:32:55.243298] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:27.231 [2024-07-15 11:32:55.243344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789525 ] 00:06:27.231 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.231 [2024-07-15 11:32:55.310848] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.489 [2024-07-15 11:32:55.386830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1789525 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@22 -- # lslocks -p 1789525 00:06:28.057 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1789525 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1789525 ']' 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1789525 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1789525 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1789525' 00:06:28.316 killing process with pid 1789525 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1789525 00:06:28.316 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1789525 00:06:28.575 00:06:28.575 real 0m1.452s 00:06:28.575 user 0m1.505s 00:06:28.575 sys 0m0.480s 00:06:28.575 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.575 11:32:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.575 ************************************ 00:06:28.575 END TEST default_locks_via_rpc 00:06:28.575 ************************************ 00:06:28.835 11:32:56 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.835 11:32:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.835 11:32:56 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.835 11:32:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.835 11:32:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 ************************************ 00:06:28.836 START TEST non_locking_app_on_locked_coremask 00:06:28.836 ************************************ 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1789819 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1789819 /var/tmp/spdk.sock 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1789819 ']' 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.836 11:32:56 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.836 11:32:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.836 [2024-07-15 11:32:56.792490] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:28.836 [2024-07-15 11:32:56.792537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789819 ] 00:06:28.836 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.836 [2024-07-15 11:32:56.860096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.836 [2024-07-15 11:32:56.923758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1790078 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1790078 /var/tmp/spdk2.sock 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1790078 ']' 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.773 11:32:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.773 [2024-07-15 11:32:57.630947] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:06:29.773 [2024-07-15 11:32:57.630996] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790078 ] 00:06:29.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.773 [2024-07-15 11:32:57.727108] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.774 [2024-07-15 11:32:57.727139] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.774 [2024-07-15 11:32:57.864292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.343 11:32:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.343 11:32:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:30.343 11:32:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1789819 00:06:30.343 11:32:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1789819 00:06:30.343 11:32:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.722 lslocks: write error 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1789819 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1789819 ']' 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1789819 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1789819 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1789819' 00:06:31.722 killing process with pid 1789819 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1789819 00:06:31.722 11:32:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1789819 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1790078 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1790078 ']' 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1790078 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- 
# ps --no-headers -o comm= 1790078 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1790078' 00:06:32.291 killing process with pid 1790078 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1790078 00:06:32.291 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1790078 00:06:32.860 00:06:32.861 real 0m3.933s 00:06:32.861 user 0m4.202s 00:06:32.861 sys 0m1.312s 00:06:32.861 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.861 11:33:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 ************************************ 00:06:32.861 END TEST non_locking_app_on_locked_coremask 00:06:32.861 ************************************ 00:06:32.861 11:33:00 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.861 11:33:00 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.861 11:33:00 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.861 11:33:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.861 11:33:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 ************************************ 00:06:32.861 START TEST locking_app_on_unlocked_coremask 00:06:32.861 ************************************ 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1790697 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1790697 /var/tmp/spdk.sock 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1790697 ']' 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
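Every locks_exist check in this log is the same two-step traced above: list the POSIX locks held by the target pid with lslocks and grep for the spdk_cpu_lock prefix (each core claimed by an SPDK app is backed by a lock file named /var/tmp/spdk_cpu_lock_<core>, as the check_remaining_locks trace later in this section shows). The recurring "lslocks: write error" is expected noise rather than a failure: grep -q exits on its first match, so lslocks takes an EPIPE on its next write to the closed pipe. The check, as traced:

  locks_exist() {
    local pid=$1
    # each claimed core holds a lock on /var/tmp/spdk_cpu_lock_<core>;
    # lslocks lists the locks held by the given process
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }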
00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.861 11:33:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.861 [2024-07-15 11:33:00.793659] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:32.861 [2024-07-15 11:33:00.793703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790697 ] 00:06:32.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.861 [2024-07-15 11:33:00.862302] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.861 [2024-07-15 11:33:00.862325] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.861 [2024-07-15 11:33:00.935561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1790791 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1790791 /var/tmp/spdk2.sock 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1790791 ']' 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.799 11:33:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.799 [2024-07-15 11:33:01.602090] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:06:33.799 [2024-07-15 11:33:01.602147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1790791 ] 00:06:33.799 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.799 [2024-07-15 11:33:01.696499] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.799 [2024-07-15 11:33:01.839416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.367 11:33:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.367 11:33:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:34.367 11:33:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1790791 00:06:34.367 11:33:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1790791 00:06:34.367 11:33:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.746 lslocks: write error 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1790697 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1790697 ']' 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1790697 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1790697 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1790697' 00:06:35.746 killing process with pid 1790697 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1790697 00:06:35.746 11:33:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1790697 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1790791 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1790791 ']' 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1790791 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1790791 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1790791' 00:06:36.313 killing process with pid 1790791 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1790791 00:06:36.313 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1790791 00:06:36.571 00:06:36.571 real 0m3.901s 00:06:36.571 user 0m4.157s 00:06:36.571 sys 0m1.267s 00:06:36.571 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.571 11:33:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.571 ************************************ 00:06:36.571 END TEST locking_app_on_unlocked_coremask 00:06:36.571 ************************************ 00:06:36.830 11:33:04 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:36.831 11:33:04 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:36.831 11:33:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.831 11:33:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.831 11:33:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.831 ************************************ 00:06:36.831 START TEST locking_app_on_locked_coremask 00:06:36.831 ************************************ 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1791682 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1791682 /var/tmp/spdk.sock 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1791682 ']' 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.831 11:33:04 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:36.831 [2024-07-15 11:33:04.768038] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
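This test, like default_locks above, drives its negative path through the NOT wrapper: run a command that must fail, and invert its status. From the traced bookkeeping (es=1, the es > 128 check, and the final (( !es == 0 ))), a sketch:

  NOT() {
    local es=0
    "$@" || es=$?
    if ((es > 128)); then
      return "$es"   # exit codes above 128 mean a signal killed it: a real failure
    fi
    ((es != 0))      # success only when the wrapped command failed normally
  }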
00:06:36.831 [2024-07-15 11:33:04.768090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791682 ] 00:06:36.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.831 [2024-07-15 11:33:04.839180] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.831 [2024-07-15 11:33:04.910384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1792007 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1792007 /var/tmp/spdk2.sock 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1792007 /var/tmp/spdk2.sock 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1792007 /var/tmp/spdk2.sock 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1792007 ']' 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.768 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:37.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:37.769 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.769 11:33:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.769 [2024-07-15 11:33:05.634705] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:06:37.769 [2024-07-15 11:33:05.634756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792007 ] 00:06:37.769 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.769 [2024-07-15 11:33:05.729524] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1791682 has claimed it. 00:06:37.769 [2024-07-15 11:33:05.729565] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:38.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1792007) - No such process 00:06:38.337 ERROR: process (pid: 1792007) is no longer running 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1791682 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1791682 00:06:38.337 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.906 lslocks: write error 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1791682 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1791682 ']' 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1791682 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.906 11:33:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1791682 00:06:38.906 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.906 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.906 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1791682' 00:06:38.906 killing process with pid 1791682 00:06:38.906 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1791682 00:06:38.906 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1791682 00:06:39.475 00:06:39.475 real 0m2.580s 00:06:39.475 user 0m2.822s 00:06:39.475 sys 0m0.778s 00:06:39.475 11:33:07 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.475 11:33:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.475 ************************************ 00:06:39.475 END TEST locking_app_on_locked_coremask 00:06:39.475 ************************************ 00:06:39.475 11:33:07 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:39.475 11:33:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:39.475 11:33:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:39.475 11:33:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.475 11:33:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:39.475 ************************************ 00:06:39.475 START TEST locking_overlapped_coremask 00:06:39.475 ************************************ 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1792317 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1792317 /var/tmp/spdk.sock 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1792317 ']' 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.475 11:33:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.475 [2024-07-15 11:33:07.449009] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
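The overlap being tested here is in the core masks themselves: the first target starts with -m 0x7 (cores 0-2) and the second will be launched with -m 0x1c (cores 2-4), so exactly core 2 is contested and the second startup must abort on its lock. The arithmetic, as a one-line check:

  printf 'shared mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4 -> bit 2 -> core 2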
00:06:39.475 [2024-07-15 11:33:07.449057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792317 ] 00:06:39.475 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.475 [2024-07-15 11:33:07.522911] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.735 [2024-07-15 11:33:07.597422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.735 [2024-07-15 11:33:07.597438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.735 [2024-07-15 11:33:07.597445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1792581 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1792581 /var/tmp/spdk2.sock 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1792581 /var/tmp/spdk2.sock 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1792581 /var/tmp/spdk2.sock 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1792581 ']' 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.304 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.304 [2024-07-15 11:33:08.291544] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:06:40.304 [2024-07-15 11:33:08.291600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792581 ] 00:06:40.304 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.304 [2024-07-15 11:33:08.388515] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1792317 has claimed it. 00:06:40.304 [2024-07-15 11:33:08.388557] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:40.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1792581) - No such process 00:06:40.873 ERROR: process (pid: 1792581) is no longer running 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1792317 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1792317 ']' 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1792317 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1792317 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1792317' 00:06:40.873 killing process with pid 1792317 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1792317 00:06:40.873 11:33:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1792317 00:06:41.442 00:06:41.442 real 0m1.881s 00:06:41.442 user 0m5.218s 00:06:41.442 sys 0m0.458s 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.442 ************************************ 00:06:41.442 END TEST locking_overlapped_coremask 00:06:41.442 ************************************ 00:06:41.442 11:33:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.442 11:33:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:41.442 11:33:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.442 11:33:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.442 11:33:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.442 ************************************ 00:06:41.442 START TEST locking_overlapped_coremask_via_rpc 00:06:41.442 ************************************ 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1792717 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1792717 /var/tmp/spdk.sock 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1792717 ']' 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:41.442 11:33:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:41.442 [2024-07-15 11:33:09.391296] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:41.443 [2024-07-15 11:33:09.391337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792717 ] 00:06:41.443 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.443 [2024-07-15 11:33:09.460370] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:41.443 [2024-07-15 11:33:09.460394] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:41.443 [2024-07-15 11:33:09.536830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.443 [2024-07-15 11:33:09.536853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.443 [2024-07-15 11:33:09.536855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1792891 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1792891 /var/tmp/spdk2.sock 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1792891 ']' 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.381 11:33:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.381 [2024-07-15 11:33:10.246621] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:42.381 [2024-07-15 11:33:10.246672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792891 ] 00:06:42.381 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.381 [2024-07-15 11:33:10.344906] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:42.381 [2024-07-15 11:33:10.344934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.640 [2024-07-15 11:33:10.491389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.640 [2024-07-15 11:33:10.494881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.640 [2024-07-15 11:33:10.494882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.212 [2024-07-15 11:33:11.076913] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1792717 has claimed it. 
00:06:43.212 request: 00:06:43.212 { 00:06:43.212 "method": "framework_enable_cpumask_locks", 00:06:43.212 "req_id": 1 00:06:43.212 } 00:06:43.212 Got JSON-RPC error response 00:06:43.212 response: 00:06:43.212 { 00:06:43.212 "code": -32603, 00:06:43.212 "message": "Failed to claim CPU core: 2" 00:06:43.212 } 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1792717 /var/tmp/spdk.sock 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1792717 ']' 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1792891 /var/tmp/spdk2.sock 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1792891 ']' 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
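The exchange above is the point of this test: both targets were started with --disable-cpumask-locks, so no lock files were taken at boot, and cores are only claimed when framework_enable_cpumask_locks is called. The first target holds -m 0x7 (cores 0-2) and the second -m 0x1c (cores 2-4); the masks overlap on core 2, hence the -32603 error. A minimal sketch of the same exchange using SPDK's rpc.py (paths and backgrounding are illustrative; the flags are taken from the trace above):

    ./build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4
    ./scripts/rpc.py framework_enable_cpumask_locks                # first target locks cores 0-2
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: JSON-RPC error -32603 "Failed to claim CPU core: 2" (0x7 & 0x1c = core 2 only)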
00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.212 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:43.529 00:06:43.529 real 0m2.112s 00:06:43.529 user 0m0.836s 00:06:43.529 sys 0m0.205s 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.529 11:33:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.529 ************************************ 00:06:43.529 END TEST locking_overlapped_coremask_via_rpc 00:06:43.529 ************************************ 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:43.529 11:33:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:43.529 11:33:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1792717 ]] 00:06:43.529 11:33:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1792717 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1792717 ']' 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1792717 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1792717 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1792717' 00:06:43.529 killing process with pid 1792717 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1792717 00:06:43.529 11:33:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1792717 00:06:43.789 11:33:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1792891 ]] 00:06:43.789 11:33:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1792891 00:06:43.789 11:33:11 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1792891 ']' 00:06:43.789 11:33:11 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1792891 00:06:43.789 11:33:11 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:43.789 11:33:11 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.789 11:33:11 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1792891 00:06:44.048 11:33:11 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:44.048 11:33:11 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:44.048 11:33:11 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1792891' 00:06:44.048 killing process with pid 1792891 00:06:44.048 11:33:11 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1792891 00:06:44.048 11:33:11 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1792891 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1792717 ]] 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1792717 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1792717 ']' 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1792717 00:06:44.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1792717) - No such process 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1792717 is not found' 00:06:44.308 Process with pid 1792717 is not found 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1792891 ]] 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1792891 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1792891 ']' 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1792891 00:06:44.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1792891) - No such process 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1792891 is not found' 00:06:44.308 Process with pid 1792891 is not found 00:06:44.308 11:33:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.308 00:06:44.308 real 0m18.735s 00:06:44.308 user 0m30.917s 00:06:44.308 sys 0m6.024s 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.308 11:33:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.308 ************************************ 00:06:44.308 END TEST cpu_locks 00:06:44.308 ************************************ 00:06:44.308 11:33:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:44.308 00:06:44.308 real 0m44.081s 00:06:44.308 user 1m21.394s 00:06:44.308 sys 0m10.170s 00:06:44.308 11:33:12 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.308 11:33:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.308 ************************************ 00:06:44.308 END TEST event 00:06:44.308 ************************************ 00:06:44.308 11:33:12 -- common/autotest_common.sh@1142 -- # return 0 00:06:44.308 11:33:12 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:44.308 11:33:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.308 11:33:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.308 
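Before the thread tests begin, note what check_remaining_locks, traced twice in the cpu_locks suite above, actually asserts: the lock files left in /var/tmp must exactly match one file per core in the 0x7 mask. As the cpu_locks.sh@36-@38 trace lines show, the comparison is a plain bash glob against a brace expansion:

    locks=(/var/tmp/spdk_cpu_lock_*)                      # lock files actually present
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # one per core in mask 0x7 (cores 0-2)
    [[ ${locks[*]} == "${locks_expected[*]}" ]]           # any leftover or missing lock fails the test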
11:33:12 -- common/autotest_common.sh@10 -- # set +x 00:06:44.308 ************************************ 00:06:44.308 START TEST thread 00:06:44.308 ************************************ 00:06:44.308 11:33:12 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:44.567 * Looking for test storage... 00:06:44.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:44.567 11:33:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.567 11:33:12 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:44.567 11:33:12 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.567 11:33:12 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.567 ************************************ 00:06:44.567 START TEST thread_poller_perf 00:06:44.567 ************************************ 00:06:44.567 11:33:12 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.567 [2024-07-15 11:33:12.522442] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:44.567 [2024-07-15 11:33:12.522520] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793378 ] 00:06:44.568 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.568 [2024-07-15 11:33:12.597298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.568 [2024-07-15 11:33:12.667830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.568 Running 1000 pollers for 1 seconds with 1 microseconds period. 
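The banner just printed maps one-to-one onto the poller_perf flags from the run_test line; reading them together (the flag meanings here are inferred from the banner text, not from documented help output):

    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
    #   -b 1000 -> register 1000 pollers
    #   -l 1    -> poller period in microseconds (the next run uses -l 0, i.e. busy pollers)
    #   -t 1    -> run the measurement for 1 second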
00:06:45.946 ====================================== 00:06:45.946 busy:2508979268 (cyc) 00:06:45.946 total_run_count: 435000 00:06:45.946 tsc_hz: 2500000000 (cyc) 00:06:45.946 ====================================== 00:06:45.946 poller_cost: 5767 (cyc), 2306 (nsec) 00:06:45.946 00:06:45.946 real 0m1.238s 00:06:45.946 user 0m1.146s 00:06:45.946 sys 0m0.088s 00:06:45.946 11:33:13 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.946 11:33:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:45.946 ************************************ 00:06:45.946 END TEST thread_poller_perf 00:06:45.946 ************************************ 00:06:45.946 11:33:13 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:45.946 11:33:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.946 11:33:13 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:45.946 11:33:13 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.946 11:33:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.946 ************************************ 00:06:45.946 START TEST thread_poller_perf 00:06:45.946 ************************************ 00:06:45.946 11:33:13 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:45.946 [2024-07-15 11:33:13.817018] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:45.946 [2024-07-15 11:33:13.817087] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793543 ] 00:06:45.946 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.946 [2024-07-15 11:33:13.889186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.946 [2024-07-15 11:33:13.962281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.946 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:47.325 ====================================== 00:06:47.325 busy:2501799576 (cyc) 00:06:47.325 total_run_count: 5633000 00:06:47.325 tsc_hz: 2500000000 (cyc) 00:06:47.325 ====================================== 00:06:47.325 poller_cost: 444 (cyc), 177 (nsec) 00:06:47.325 00:06:47.325 real 0m1.235s 00:06:47.325 user 0m1.147s 00:06:47.325 sys 0m0.084s 00:06:47.325 11:33:15 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.325 11:33:15 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.325 ************************************ 00:06:47.325 END TEST thread_poller_perf 00:06:47.325 ************************************ 00:06:47.325 11:33:15 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:47.325 11:33:15 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.325 00:06:47.325 real 0m2.692s 00:06:47.325 user 0m2.378s 00:06:47.325 sys 0m0.325s 00:06:47.325 11:33:15 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.325 11:33:15 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.325 ************************************ 00:06:47.325 END TEST thread 00:06:47.325 ************************************ 00:06:47.325 11:33:15 -- common/autotest_common.sh@1142 -- # return 0 00:06:47.325 11:33:15 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:47.325 11:33:15 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.325 11:33:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.325 11:33:15 -- common/autotest_common.sh@10 -- # set +x 00:06:47.325 ************************************ 00:06:47.325 START TEST accel 00:06:47.325 ************************************ 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:47.325 * Looking for test storage... 00:06:47.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:47.325 11:33:15 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:47.325 11:33:15 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:47.325 11:33:15 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.325 11:33:15 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1793868 00:06:47.325 11:33:15 accel -- accel/accel.sh@63 -- # waitforlisten 1793868 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@829 -- # '[' -z 1793868 ']' 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
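The poller_cost figures in the two result tables above are simply busy cycles divided by total_run_count, converted to nanoseconds through the reported TSC rate. Re-deriving the first table's numbers (plain arithmetic; shell syntax only for illustration):

    busy=2508979268 runs=435000 tsc_hz=2500000000
    cyc=$(( busy / runs ))                   # 5767 cycles per poller invocation
    ns=$(( cyc * 1000000000 / tsc_hz ))      # 2306 ns, matching the table
    # Comparing the runs: timed pollers (-l 1) cost ~13x more per invocation than
    # busy pollers (-l 0) -- 5767 vs 444 cycles -- and completed ~13x fewer runs.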
00:06:47.325 11:33:15 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.325 11:33:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.325 11:33:15 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:47.325 11:33:15 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:47.325 11:33:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.325 11:33:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.325 11:33:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.325 11:33:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.325 11:33:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.325 11:33:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:47.325 11:33:15 accel -- accel/accel.sh@41 -- # jq -r . 00:06:47.325 [2024-07-15 11:33:15.308441] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:47.325 [2024-07-15 11:33:15.308498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793868 ] 00:06:47.325 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.325 [2024-07-15 11:33:15.379237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.584 [2024-07-15 11:33:15.457582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.152 11:33:16 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.152 11:33:16 accel -- common/autotest_common.sh@862 -- # return 0 00:06:48.152 11:33:16 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:48.152 11:33:16 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:48.152 11:33:16 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:48.152 11:33:16 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:48.152 11:33:16 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:48.152 11:33:16 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:48.152 11:33:16 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:48.152 11:33:16 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.152 11:33:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.152 11:33:16 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.152 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.152 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.152 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.152 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.152 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.152 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.152 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.152 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 
11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # IFS== 00:06:48.153 11:33:16 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:48.153 11:33:16 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:48.153 11:33:16 accel -- accel/accel.sh@75 -- # killprocess 1793868 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@948 -- # '[' -z 1793868 ']' 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@952 -- # kill -0 1793868 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@953 -- # uname 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1793868 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1793868' 00:06:48.153 killing process with pid 1793868 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@967 -- # kill 1793868 00:06:48.153 11:33:16 accel -- common/autotest_common.sh@972 -- # wait 1793868 00:06:48.412 11:33:16 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:48.412 11:33:16 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:48.412 11:33:16 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:48.412 11:33:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.412 11:33:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 11:33:16 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:48.670 11:33:16 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:48.670 11:33:16 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.670 11:33:16 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 11:33:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.670 11:33:16 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:48.670 11:33:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.670 11:33:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.670 11:33:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.670 ************************************ 00:06:48.670 START TEST accel_missing_filename 00:06:48.670 ************************************ 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:48.670 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:48.670 11:33:16 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:48.670 [2024-07-15 11:33:16.681676] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:48.670 [2024-07-15 11:33:16.681759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794167 ] 00:06:48.670 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.670 [2024-07-15 11:33:16.752839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.929 [2024-07-15 11:33:16.826628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.929 [2024-07-15 11:33:16.867910] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.929 [2024-07-15 11:33:16.927874] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:48.929 A filename is required. 
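The failure above is the expected outcome: the test deliberately runs accel_perf with -w compress but no input file, and the NOT wrapper converts the non-zero exit into a pass. The valid form, which the compress_verify test below uses, supplies the input via -l (relative path shown here for brevity; treat this as a sketch):

    ./build/examples/accel_perf -t 1 -w compress                      # fails: "A filename is required."
    ./build/examples/accel_perf -t 1 -w compress -l test/accel/bib    # the same run with an input file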
00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.929 00:06:48.929 real 0m0.348s 00:06:48.929 user 0m0.236s 00:06:48.929 sys 0m0.131s 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.929 11:33:16 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:48.929 ************************************ 00:06:48.929 END TEST accel_missing_filename 00:06:48.929 ************************************ 00:06:49.188 11:33:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.188 11:33:17 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.188 11:33:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:49.188 11:33:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.188 11:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.188 ************************************ 00:06:49.188 START TEST accel_compress_verify 00:06:49.188 ************************************ 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.188 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.188 11:33:17 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:49.188 11:33:17 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:49.188 [2024-07-15 11:33:17.109997] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:49.188 [2024-07-15 11:33:17.110055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794308 ] 00:06:49.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.188 [2024-07-15 11:33:17.182321] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.188 [2024-07-15 11:33:17.250744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.188 [2024-07-15 11:33:17.292122] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:49.447 [2024-07-15 11:33:17.351474] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:49.447 00:06:49.447 Compression does not support the verify option, aborting. 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.447 00:06:49.447 real 0m0.342s 00:06:49.447 user 0m0.240s 00:06:49.447 sys 0m0.138s 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.447 11:33:17 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 ************************************ 00:06:49.447 END TEST accel_compress_verify 00:06:49.447 ************************************ 00:06:49.447 11:33:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.447 11:33:17 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:49.447 11:33:17 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:49.447 11:33:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.447 11:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.447 ************************************ 00:06:49.447 START TEST accel_wrong_workload 00:06:49.447 ************************************ 00:06:49.447 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:49.447 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:49.447 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:49.448 11:33:17 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:49.448 11:33:17 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:49.448 Unsupported workload type: foobar 00:06:49.448 [2024-07-15 11:33:17.519858] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:49.448 accel_perf options: 00:06:49.448 [-h help message] 00:06:49.448 [-q queue depth per core] 00:06:49.448 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:49.448 [-T number of threads per core 00:06:49.448 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:49.448 [-t time in seconds] 00:06:49.448 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:49.448 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:49.448 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:49.448 [-l for compress/decompress workloads, name of uncompressed input file 00:06:49.448 [-S for crc32c workload, use this seed value (default 0) 00:06:49.448 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:49.448 [-f for fill workload, use this BYTE value (default 255) 00:06:49.448 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:49.448 [-y verify result if this switch is on] 00:06:49.448 [-a tasks to allocate per core (default: same value as -q)] 00:06:49.448 Can be used to spread operations across a wider range of memory. 
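All of these negative accel tests lean on the harness's NOT wrapper, visible in the run_test lines, which inverts the exit status of the command it wraps and feeds the es bookkeeping traced above. The real helper lives in autotest_common.sh; a minimal sketch of the idea:

    NOT() {
        if "$@"; then return 1; fi    # wrapped command unexpectedly succeeded
        return 0                      # a failure is the expected outcome here
    }
    NOT ./build/examples/accel_perf -t 1 -w foobar && echo 'foobar rejected, as expected'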
00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.448 00:06:49.448 real 0m0.033s 00:06:49.448 user 0m0.018s 00:06:49.448 sys 0m0.015s 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.448 11:33:17 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:49.448 ************************************ 00:06:49.448 END TEST accel_wrong_workload 00:06:49.448 ************************************ 00:06:49.448 Error: writing output failed: Broken pipe 00:06:49.706 11:33:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.706 11:33:17 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:49.706 11:33:17 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:49.706 11:33:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.706 11:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.706 ************************************ 00:06:49.706 START TEST accel_negative_buffers 00:06:49.706 ************************************ 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.706 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:49.707 11:33:17 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:49.707 -x option must be non-negative. 
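As with the foobar workload, -x -1 is rejected while parsing arguments, before any accel work is queued. Per the option summary repeated below, a passing xor run needs at least two source buffers; an illustrative (not traced) invocation:

    ./build/examples/accel_perf -t 1 -w xor -y -x 2    # xor with the documented minimum of 2 source buffers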
00:06:49.707 [2024-07-15 11:33:17.638480] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:49.707 accel_perf options: 00:06:49.707 [-h help message] 00:06:49.707 [-q queue depth per core] 00:06:49.707 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:49.707 [-T number of threads per core 00:06:49.707 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:49.707 [-t time in seconds] 00:06:49.707 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:49.707 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:49.707 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:49.707 [-l for compress/decompress workloads, name of uncompressed input file 00:06:49.707 [-S for crc32c workload, use this seed value (default 0) 00:06:49.707 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:49.707 [-f for fill workload, use this BYTE value (default 255) 00:06:49.707 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:49.707 [-y verify result if this switch is on] 00:06:49.707 [-a tasks to allocate per core (default: same value as -q)] 00:06:49.707 Can be used to spread operations across a wider range of memory. 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:49.707 00:06:49.707 real 0m0.037s 00:06:49.707 user 0m0.016s 00:06:49.707 sys 0m0.021s 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.707 11:33:17 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:49.707 ************************************ 00:06:49.707 END TEST accel_negative_buffers 00:06:49.707 ************************************ 00:06:49.707 Error: writing output failed: Broken pipe 00:06:49.707 11:33:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.707 11:33:17 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:49.707 11:33:17 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.707 11:33:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.707 11:33:17 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.707 ************************************ 00:06:49.707 START TEST accel_crc32c 00:06:49.707 ************************************ 00:06:49.707 11:33:17 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:49.707 11:33:17 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:49.707 [2024-07-15 11:33:17.757814] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:49.707 [2024-07-15 11:33:17.757881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794500 ] 00:06:49.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.965 [2024-07-15 11:33:17.831280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.965 [2024-07-15 11:33:17.905302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.965 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:49.966 11:33:17 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.342 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.342 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:51.342 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.342 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.342 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:51.343 11:33:19 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.343 00:06:51.343 real 0m1.350s 00:06:51.343 user 0m1.223s 00:06:51.343 sys 0m0.132s 00:06:51.343 11:33:19 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.343 11:33:19 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:51.343 ************************************ 00:06:51.343 END TEST accel_crc32c 00:06:51.343 ************************************ 00:06:51.343 11:33:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.343 11:33:19 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:51.343 11:33:19 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:51.343 11:33:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.343 11:33:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.343 ************************************ 00:06:51.343 START TEST accel_crc32c_C2 00:06:51.343 ************************************ 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:51.343 11:33:19 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:51.343 [2024-07-15 11:33:19.174431] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:51.343 [2024-07-15 11:33:19.174488] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1794790 ] 00:06:51.343 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.343 [2024-07-15 11:33:19.242706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.343 [2024-07-15 11:33:19.310223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:51.343 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:51.344 11:33:19 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.722 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.722 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.723 00:06:52.723 real 0m1.337s 00:06:52.723 user 0m1.212s 00:06:52.723 sys 0m0.129s 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.723 11:33:20 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:52.723 ************************************ 00:06:52.723 END TEST accel_crc32c_C2 00:06:52.723 ************************************ 00:06:52.723 11:33:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:52.723 11:33:20 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:52.723 11:33:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:52.723 11:33:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.723 11:33:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:52.723 ************************************ 00:06:52.723 START TEST accel_copy 00:06:52.723 ************************************ 00:06:52.723 11:33:20 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:52.723 [2024-07-15 11:33:20.581682] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:52.723 [2024-07-15 11:33:20.581740] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795067 ] 00:06:52.723 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.723 [2024-07-15 11:33:20.649337] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.723 [2024-07-15 11:33:20.717792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.723 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:52.724 11:33:20 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 
11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:54.129 11:33:21 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.129 00:06:54.129 real 0m1.335s 00:06:54.129 user 0m1.205s 00:06:54.129 sys 0m0.135s 00:06:54.129 11:33:21 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.129 11:33:21 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.129 ************************************ 00:06:54.129 END TEST accel_copy 00:06:54.129 ************************************ 00:06:54.129 11:33:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.129 11:33:21 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.129 11:33:21 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:54.129 11:33:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.129 11:33:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.129 ************************************ 00:06:54.129 START TEST accel_fill 00:06:54.129 ************************************ 00:06:54.129 11:33:21 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:54.129 11:33:21 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:54.129 [2024-07-15 11:33:21.986605] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:54.129 [2024-07-15 11:33:21.986663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795349 ] 00:06:54.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.129 [2024-07-15 11:33:22.054634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.129 [2024-07-15 11:33:22.123480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:54.129 11:33:22 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:55.507 11:33:23 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.507 00:06:55.507 real 0m1.336s 00:06:55.507 user 0m1.208s 00:06:55.507 sys 0m0.133s 00:06:55.507 11:33:23 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.507 11:33:23 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:55.507 ************************************ 00:06:55.507 END TEST accel_fill 00:06:55.507 ************************************ 00:06:55.507 11:33:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.507 11:33:23 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:55.507 11:33:23 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.507 11:33:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.507 11:33:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.507 ************************************ 00:06:55.507 START TEST accel_copy_crc32c 00:06:55.507 ************************************ 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:55.507 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:55.508 [2024-07-15 11:33:23.393406] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:55.508 [2024-07-15 11:33:23.393467] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795571 ] 00:06:55.508 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.508 [2024-07-15 11:33:23.462706] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.508 [2024-07-15 11:33:23.531519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:55.508 
11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:55.508 11:33:23 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.887 00:06:56.887 real 0m1.338s 00:06:56.887 user 0m1.214s 00:06:56.887 sys 0m0.129s 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.887 11:33:24 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:56.887 ************************************ 00:06:56.887 END TEST accel_copy_crc32c 00:06:56.887 ************************************ 00:06:56.887 11:33:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.887 11:33:24 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.887 11:33:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.887 11:33:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.887 11:33:24 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.887 ************************************ 00:06:56.887 START TEST accel_copy_crc32c_C2 00:06:56.887 ************************************ 00:06:56.887 11:33:24 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:56.887 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:56.887 [2024-07-15 11:33:24.804036] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:06:56.887 [2024-07-15 11:33:24.804115] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795785 ] 00:06:56.888 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.888 [2024-07-15 11:33:24.874796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.888 [2024-07-15 11:33:24.943947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:56.888 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:57.147 11:33:24 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.083 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.083 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.083 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.083 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.084 00:06:58.084 real 0m1.342s 00:06:58.084 user 0m1.217s 00:06:58.084 sys 0m0.130s 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.084 11:33:26 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:58.084 ************************************ 00:06:58.084 END TEST accel_copy_crc32c_C2 00:06:58.084 ************************************ 00:06:58.084 11:33:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.084 11:33:26 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:58.084 11:33:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:58.084 11:33:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.084 11:33:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.084 ************************************ 00:06:58.084 START TEST accel_dualcast 00:06:58.084 ************************************ 00:06:58.084 11:33:26 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:58.084 11:33:26 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:58.343 [2024-07-15 11:33:26.205627] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:06:58.343 [2024-07-15 11:33:26.205689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1795992 ] 00:06:58.343 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.343 [2024-07-15 11:33:26.275161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.343 [2024-07-15 11:33:26.344913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:58.343 11:33:26 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:59.721 11:33:27 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.722 11:33:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:59.722 11:33:27 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.722 00:06:59.722 real 0m1.335s 00:06:59.722 user 0m1.210s 00:06:59.722 sys 0m0.129s 00:06:59.722 11:33:27 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.722 11:33:27 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:59.722 ************************************ 00:06:59.722 END TEST accel_dualcast 00:06:59.722 ************************************ 00:06:59.722 11:33:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.722 11:33:27 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:59.722 11:33:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.722 11:33:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.722 11:33:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.722 ************************************ 00:06:59.722 START TEST accel_compare 00:06:59.722 ************************************ 00:06:59.722 11:33:27 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:59.722 [2024-07-15 11:33:27.614234] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
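The START TEST / END TEST banners and the real/user/sys triplet around each workload come from the run_test wrapper; a rough, hypothetical sketch of that pattern (the real helper lives in common/autotest_common.sh):

# Hypothetical simplification of run_test as its output appears in this log.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"      # emits the real/user/sys lines seen after each test
    echo "************ END TEST $name ************"
}
run_test accel_compare accel_test -t 1 -w compare -y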
00:06:59.722 [2024-07-15 11:33:27.614290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796234 ] 00:06:59.722 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.722 [2024-07-15 11:33:27.685903] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.722 [2024-07-15 11:33:27.755989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:59.722 11:33:27 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.099 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.100 
11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:01.100 11:33:28 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.100 00:07:01.100 real 0m1.344s 00:07:01.100 user 0m1.233s 00:07:01.100 sys 0m0.116s 00:07:01.100 11:33:28 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.100 11:33:28 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:01.100 ************************************ 00:07:01.100 END TEST accel_compare 00:07:01.100 ************************************ 00:07:01.100 11:33:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.100 11:33:28 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:01.100 11:33:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:01.100 11:33:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.100 11:33:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.100 ************************************ 00:07:01.100 START TEST accel_xor 00:07:01.100 ************************************ 00:07:01.100 11:33:28 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:01.100 11:33:28 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:01.100 11:33:28 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:01.100 11:33:28 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.100 11:33:28 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.100 11:33:28 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:01.100 [2024-07-15 11:33:29.024901] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
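Two xor variants run back to back from here. Judging by the val=2 echoed in the first xor trace and the -x 3 passed to the second, -x appears to set the number of xor source buffers (default 2):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y          # 2 source buffers (default, per the val=2 below)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y -x 3     # 3 source buffers (the following TEST accel_xor)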
00:07:01.100 [2024-07-15 11:33:29.024964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796511 ] 00:07:01.100 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.100 [2024-07-15 11:33:29.094561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.100 [2024-07-15 11:33:29.162640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.100 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.358 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:01.359 11:33:29 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.296 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:02.297 11:33:30 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.297 00:07:02.297 real 0m1.339s 00:07:02.297 user 0m1.215s 00:07:02.297 sys 0m0.128s 00:07:02.297 11:33:30 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.297 11:33:30 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:02.297 ************************************ 00:07:02.297 END TEST accel_xor 00:07:02.297 ************************************ 00:07:02.297 11:33:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.297 11:33:30 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:02.297 11:33:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:02.297 11:33:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.297 11:33:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.557 ************************************ 00:07:02.557 START TEST accel_xor 00:07:02.557 ************************************ 00:07:02.557 11:33:30 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:02.557 [2024-07-15 11:33:30.434025] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
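Every app start in this section logs 'EAL: No free 2048 kB hugepages reported on node 1'. Since each run is pinned to a single core (-c 0x1) and every test passes, the notice looks informational here: node 1 simply has no 2 MB pages reserved. The per-node pools it refers to can be inspected through standard sysfs paths (not SPDK-specific):

cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages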
00:07:02.557 [2024-07-15 11:33:30.434091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796797 ] 00:07:02.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.557 [2024-07-15 11:33:30.504370] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.557 [2024-07-15 11:33:30.571427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:02.557 11:33:30 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:03.938 11:33:31 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.938 00:07:03.938 real 0m1.338s 00:07:03.938 user 0m1.210s 00:07:03.938 sys 0m0.133s 00:07:03.938 11:33:31 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.938 11:33:31 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:03.938 ************************************ 00:07:03.938 END TEST accel_xor 00:07:03.938 ************************************ 00:07:03.938 11:33:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.938 11:33:31 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:03.938 11:33:31 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:03.938 11:33:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.938 11:33:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.938 ************************************ 00:07:03.938 START TEST accel_dif_verify 00:07:03.938 ************************************ 00:07:03.938 11:33:31 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:03.938 11:33:31 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:03.938 [2024-07-15 11:33:31.843586] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
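The sizes the dif_verify trace echoes below ('4096 bytes' buffers, '512 bytes' block size, '8 bytes' of metadata) are consistent with the standard T10 DIF layout; the reading of those trace values is an assumption:

# 4096-byte buffer / 512-byte blocks = 8 protected blocks, each carrying an
# 8-byte DIF tuple (2-byte guard CRC + 2-byte application tag + 4-byte
# reference tag).
echo $((4096 / 512))   # -> 8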
00:07:03.938 [2024-07-15 11:33:31.843661] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797076 ] 00:07:03.938 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.938 [2024-07-15 11:33:31.911187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.938 [2024-07-15 11:33:31.978842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.938 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:03.939 11:33:32 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:05.346 11:33:33 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.346 00:07:05.346 real 0m1.335s 00:07:05.346 user 0m1.219s 00:07:05.346 sys 0m0.122s 00:07:05.346 11:33:33 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.346 11:33:33 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:05.346 ************************************ 00:07:05.346 END TEST accel_dif_verify 00:07:05.346 ************************************ 00:07:05.346 11:33:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.346 11:33:33 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:05.346 11:33:33 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:05.346 11:33:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.346 11:33:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.346 ************************************ 00:07:05.346 START TEST accel_dif_generate 00:07:05.346 ************************************ 00:07:05.346 11:33:33 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 
11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:05.346 [2024-07-15 11:33:33.246873] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:05.346 [2024-07-15 11:33:33.246932] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797355 ] 00:07:05.346 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.346 [2024-07-15 11:33:33.315136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.346 [2024-07-15 11:33:33.383245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:05.346 11:33:33 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:05.346 11:33:33 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.722 11:33:34 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:06.722 11:33:34 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.722 00:07:06.722 real 0m1.335s 00:07:06.722 user 0m1.211s 00:07:06.722 sys 0m0.129s 00:07:06.722 11:33:34 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.722 11:33:34 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:06.722 ************************************ 00:07:06.722 END TEST accel_dif_generate 00:07:06.722 ************************************ 00:07:06.722 11:33:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.722 11:33:34 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:06.722 11:33:34 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:06.722 11:33:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.722 11:33:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.722 ************************************ 00:07:06.722 START TEST accel_dif_generate_copy 00:07:06.722 ************************************ 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:06.722 [2024-07-15 11:33:34.645573] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
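For reference, the accel_dif_generate pass that just finished boils down to a single accel_perf invocation. A minimal hand-run sketch, assuming the same build tree as the logged paths and that the JSON config fed via -c /dev/fd/62 can simply be dropped (the trace shows accel_json_cfg=() stays empty, so the software module, confirmed by the [[ software == software ]] check above, is selected either way):

    # Rerun the dif_generate workload for one second on the software module;
    # -t and -w are copied verbatim from the logged command line.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate

The 4096-, 512- and 8-byte values read back in the trace are the DIF buffer and metadata geometry the test sets up (our reading of the harness variables), and real 0m1.335s against -t 1 is simply the one-second run plus app start-up and teardown.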
00:07:06.722 [2024-07-15 11:33:34.645632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797642 ] 00:07:06.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.722 [2024-07-15 11:33:34.713132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.722 [2024-07-15 11:33:34.780681] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.722 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.723 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:06.982 11:33:34 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.918 00:07:07.918 real 0m1.328s 00:07:07.918 user 0m1.211s 00:07:07.918 sys 0m0.122s 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:07.918 11:33:35 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:07.918 ************************************ 00:07:07.918 END TEST accel_dif_generate_copy 00:07:07.918 ************************************ 00:07:07.918 11:33:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:07.918 11:33:35 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:07.918 11:33:35 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.918 11:33:35 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:07.918 11:33:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.918 11:33:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.918 ************************************ 00:07:07.918 START TEST accel_comp 00:07:07.918 ************************************ 00:07:07.918 11:33:36 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:07.918 11:33:36 accel.accel_comp -- 
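accel_dif_generate_copy differs from the previous pass only in the workload name, and both are wall-clock bound by -t 1, which is why real 0m1.328s here is indistinguishable from the 0m1.335s above; the throughput figures that would actually separate them are printed by accel_perf itself and are not captured in this excerpt. A back-to-back comparison under the same assumptions as the sketch above:

    # Run both DIF workloads under identical settings; the workload names
    # are taken verbatim from the logged command lines.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for wl in dif_generate dif_generate_copy; do
        "$SPDK/build/examples/accel_perf" -t 1 -w "$wl"
    done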
accel/accel.sh@16 -- # local accel_opc 00:07:07.918 11:33:36 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:07.918 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:07.918 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:07.918 11:33:36 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:08.177 11:33:36 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:08.177 [2024-07-15 11:33:36.047161] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:08.178 [2024-07-15 11:33:36.047223] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1797924 ] 00:07:08.178 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.178 [2024-07-15 11:33:36.114940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.178 [2024-07-15 11:33:36.182410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:08.178 11:33:36 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:09.555 11:33:37 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.555 00:07:09.555 real 0m1.338s 00:07:09.555 user 0m1.221s 00:07:09.555 sys 0m0.122s 00:07:09.555 11:33:37 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.555 11:33:37 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:09.555 ************************************ 00:07:09.555 END TEST accel_comp 00:07:09.555 ************************************ 00:07:09.555 11:33:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.555 11:33:37 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.555 11:33:37 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.555 11:33:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.555 11:33:37 accel -- 
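Unlike the DIF passes, accel_comp needs an input corpus: the logged command line adds -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib, and the trace reads that same path back as a val. A minimal equivalent, assuming the same checkout layout:

    # Compress the bundled test file for one second; -l points accel_perf
    # at the input corpus, exactly as on the logged command line.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w compress -l "$SPDK/test/accel/bib"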
common/autotest_common.sh@10 -- # set +x 00:07:09.555 ************************************ 00:07:09.555 START TEST accel_decomp 00:07:09.555 ************************************ 00:07:09.555 11:33:37 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:09.555 [2024-07-15 11:33:37.461220] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:07:09.555 [2024-07-15 11:33:37.461276] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798203 ] 00:07:09.555 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.555 [2024-07-15 11:33:37.532158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.555 [2024-07-15 11:33:37.605627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.555 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:09.556 11:33:37 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:10.933 11:33:38 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.933 00:07:10.933 real 0m1.348s 00:07:10.933 user 0m1.218s 00:07:10.933 sys 0m0.136s 00:07:10.933 11:33:38 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:10.933 11:33:38 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:10.933 ************************************ 00:07:10.933 END TEST accel_decomp 00:07:10.933 ************************************ 00:07:10.933 11:33:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:10.933 11:33:38 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.933 11:33:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:10.933 11:33:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.933 11:33:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.933 ************************************ 00:07:10.933 START TEST accel_decomp_full 00:07:10.933 ************************************ 00:07:10.933 11:33:38 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:10.933 11:33:38 
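The accel_decomp pass reuses the bib corpus and adds -y, which we read as asking accel_perf to verify the decompressed output; the flag itself is copied verbatim from the log, while that interpretation is an assumption. Hand-run sketch:

    # Decompress the test corpus for one second; -y is carried over verbatim
    # from the logged command line (output verification, as we read it).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y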
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:10.933 11:33:38 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:10.933 [2024-07-15 11:33:38.878594] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:10.933 [2024-07-15 11:33:38.878651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798450 ] 00:07:10.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.933 [2024-07-15 11:33:38.947963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.933 [2024-07-15 11:33:39.016134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:11.193 11:33:39 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.129 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.129 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.129 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:12.130 11:33:40 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.130 00:07:12.130 real 0m1.348s 00:07:12.130 user 0m1.218s 00:07:12.130 sys 0m0.132s 00:07:12.130 11:33:40 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.130 11:33:40 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:12.130 ************************************ 00:07:12.130 END TEST accel_decomp_full 00:07:12.130 ************************************ 00:07:12.389 11:33:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.389 11:33:40 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.389 11:33:40 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:12.389 11:33:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.389 11:33:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.389 ************************************ 00:07:12.389 START TEST accel_decomp_mcore 00:07:12.389 ************************************ 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:12.389 [2024-07-15 11:33:40.301867] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:07:12.389 [2024-07-15 11:33:40.301950] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798696 ] 00:07:12.389 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.389 [2024-07-15 11:33:40.373174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:12.389 [2024-07-15 11:33:40.446424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.389 [2024-07-15 11:33:40.446522] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.389 [2024-07-15 11:33:40.446596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.389 [2024-07-15 11:33:40.446599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.389 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:12.648 11:33:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.585 00:07:13.585 real 0m1.362s 00:07:13.585 user 0m4.570s 00:07:13.585 sys 0m0.134s 00:07:13.585 11:33:41 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.585 11:33:41 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:13.585 ************************************ 00:07:13.585 END TEST accel_decomp_mcore 00:07:13.585 ************************************ 00:07:13.585 11:33:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:13.585 11:33:41 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.585 11:33:41 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:13.585 11:33:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.585 11:33:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.845 ************************************ 00:07:13.845 START TEST accel_decomp_full_mcore 00:07:13.845 ************************************ 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:13.845 [2024-07-15 11:33:41.746009] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
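Note on the repeated trace entries: the wall of 'accel/accel.sh@19 -- # IFS=:', 'read -r var val' and 'case "$var" in' lines above is xtrace output from the expectation loop in accel.sh, which reads key:value pairs describing the expected accel job (module, opcode, queue depth, duration) and traces the same three statements on every iteration. A minimal sketch of that shell pattern follows; the keys and the herestring input are illustrative, not the exact accel.sh source:

    # parse key:value settings the way the traced loop does
    while IFS=: read -r var val; do
        case "$var" in
            accel_module) accel_module=$val ;;   # e.g. software
            opc) accel_opc=$val ;;               # e.g. decompress
            *) ;;                                # ignore unrecognized keys
        esac
    done <<< $'accel_module:software\nopc:decompress'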
00:07:13.845 [2024-07-15 11:33:41.746081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798935 ] 00:07:13.845 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.845 [2024-07-15 11:33:41.817457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:13.845 [2024-07-15 11:33:41.890564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.845 [2024-07-15 11:33:41.890661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:13.845 [2024-07-15 11:33:41.890742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:13.845 [2024-07-15 11:33:41.890745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:13.845 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:13.846 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:14.105 11:33:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.040 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.041 00:07:15.041 real 0m1.375s 00:07:15.041 user 0m4.618s 00:07:15.041 sys 0m0.135s 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.041 11:33:43 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:15.041 ************************************ 00:07:15.041 END TEST accel_decomp_full_mcore 00:07:15.041 ************************************ 00:07:15.041 11:33:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.041 11:33:43 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.041 11:33:43 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:15.041 11:33:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.041 11:33:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.306 ************************************ 00:07:15.306 START TEST accel_decomp_mthread 00:07:15.306 ************************************ 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:15.306 [2024-07-15 11:33:43.204321] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
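Note on the accel_perf flags: each decompress case reruns the same binary with a different flag set. Reading the traced values against the command lines, -t 1 runs for '1 seconds', -w decompress selects the workload, -l points at the compressed input (test/accel/bib), -y verifies the output, -o 0 requests the full output size (hence '111250 bytes' in the full variants versus '4096 bytes'), -m sets the core mask (0xf above, 0x1 here) and -T adds worker threads for the mthread variants. These readings are inferred from the trace; accel_perf -h is the authoritative reference. A representative standalone invocation:

    # rerun of the traced single-core, two-thread decompress case
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/examples/accel_perf -t 1 -w decompress \
        -l $SPDK/test/accel/bib -y -T 2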
00:07:15.306 [2024-07-15 11:33:43.204402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799170 ] 00:07:15.306 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.306 [2024-07-15 11:33:43.275245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.306 [2024-07-15 11:33:43.345004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.306 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.307 11:33:43 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:15.307 11:33:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:16.682 00:07:16.682 real 0m1.354s 00:07:16.682 user 0m1.240s 00:07:16.682 sys 0m0.130s 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:16.682 11:33:44 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:16.682 ************************************ 00:07:16.682 END TEST accel_decomp_mthread 00:07:16.682 ************************************ 00:07:16.682 11:33:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:16.682 11:33:44 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.682 11:33:44 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:16.682 11:33:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.682 11:33:44 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:16.682 ************************************ 00:07:16.682 START TEST accel_decomp_full_mthread 00:07:16.682 ************************************ 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:16.682 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:16.683 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:16.683 [2024-07-15 11:33:44.641536] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
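Note on -c /dev/fd/62: this is the no-tempfile config trick visible in every accel_perf command line here. build_accel_config assembles the accel_json_cfg array, jq -r . serializes it, and the harness hands the stream to the binary through bash process substitution, which expands to a /dev/fd/NN path. A minimal sketch of the mechanism; reader_binary and the JSON string are placeholders, not the real accel config:

    json='{"subsystems": []}'                  # placeholder config
    reader_binary() { cat "$1"; }              # stands in for accel_perf -c <file>
    reader_binary <(echo "$json" | jq -r .)    # argument expands to /dev/fd/63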
00:07:16.683 [2024-07-15 11:33:44.641616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799390 ] 00:07:16.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.683 [2024-07-15 11:33:44.712064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.683 [2024-07-15 11:33:44.781022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.942 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:16.943 11:33:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:16.943 11:33:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.879 00:07:17.879 real 0m1.372s 00:07:17.879 user 0m1.259s 00:07:17.879 sys 0m0.126s 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.879 11:33:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:17.879 ************************************ 00:07:17.879 END 
TEST accel_decomp_full_mthread 00:07:17.879 ************************************ 00:07:18.138 11:33:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:18.138 11:33:46 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:18.138 11:33:46 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.138 11:33:46 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:18.138 11:33:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:18.138 11:33:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.138 11:33:46 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.138 11:33:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.138 11:33:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.138 11:33:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.138 11:33:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.138 11:33:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.138 11:33:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:18.138 11:33:46 accel -- accel/accel.sh@41 -- # jq -r . 00:07:18.138 ************************************ 00:07:18.138 START TEST accel_dif_functional_tests 00:07:18.138 ************************************ 00:07:18.138 11:33:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:18.138 [2024-07-15 11:33:46.116491] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:18.138 [2024-07-15 11:33:46.116536] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799662 ] 00:07:18.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.138 [2024-07-15 11:33:46.187750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.397 [2024-07-15 11:33:46.256157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.397 [2024-07-15 11:33:46.256254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.397 [2024-07-15 11:33:46.256255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.397 00:07:18.397 00:07:18.397 CUnit - A unit testing framework for C - Version 2.1-3 00:07:18.397 http://cunit.sourceforge.net/ 00:07:18.397 00:07:18.397 00:07:18.397 Suite: accel_dif 00:07:18.397 Test: verify: DIF generated, GUARD check ...passed 00:07:18.397 Test: verify: DIF generated, APPTAG check ...passed 00:07:18.397 Test: verify: DIF generated, REFTAG check ...passed 00:07:18.397 Test: verify: DIF not generated, GUARD check ...[2024-07-15 11:33:46.324575] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:18.397 passed 00:07:18.397 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 11:33:46.324626] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:18.397 passed 00:07:18.397 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 11:33:46.324648] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:18.397 passed 00:07:18.397 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:18.397 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
11:33:46.324694] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:07:18.397 passed
00:07:18.397 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:07:18.397 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:07:18.397 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:07:18.397 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 11:33:46.324794] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:07:18.397 passed
00:07:18.397 Test: verify copy: DIF generated, GUARD check ...passed
00:07:18.397 Test: verify copy: DIF generated, APPTAG check ...passed
00:07:18.397 Test: verify copy: DIF generated, REFTAG check ...passed
00:07:18.397 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 11:33:46.324910] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:07:18.397 passed
00:07:18.397 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 11:33:46.324940] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:07:18.397 passed
00:07:18.397 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 11:33:46.324964] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:07:18.397 passed
00:07:18.397 Test: generate copy: DIF generated, GUARD check ...passed
00:07:18.397 Test: generate copy: DIF generated, APPTAG check ...passed
00:07:18.397 Test: generate copy: DIF generated, REFTAG check ...passed
00:07:18.397 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:07:18.397 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:07:18.397 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:07:18.397 Test: generate copy: iovecs-len validate ...[2024-07-15 11:33:46.325125] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:07:18.397 passed
00:07:18.397 Test: generate copy: buffer alignment validate ...passed
00:07:18.397
00:07:18.397 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:18.397               suites      1      1    n/a      0        0
00:07:18.397                tests     26     26     26      0        0
00:07:18.397              asserts    115    115    115      0      n/a
00:07:18.397
00:07:18.397 Elapsed time = 0.002 seconds
00:07:18.397
00:07:18.397 real 0m0.423s
00:07:18.397 user 0m0.580s
00:07:18.397 sys 0m0.158s
00:07:18.397 11:33:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.397 11:33:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:07:18.397 ************************************
00:07:18.397 END TEST accel_dif_functional_tests
00:07:18.397 ************************************
00:07:18.657 11:33:46 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:18.657
00:07:18.657 real 0m31.384s
00:07:18.657 user 0m34.420s
00:07:18.657 sys 0m4.882s
00:07:18.657 11:33:46 accel -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:18.657 11:33:46 accel -- common/autotest_common.sh@10 -- # set +x
00:07:18.657 ************************************
00:07:18.657 END TEST accel
00:07:18.657 ************************************
00:07:18.657 11:33:46 -- common/autotest_common.sh@1142 -- # return 0
00:07:18.657 11:33:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:18.657 11:33:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:07:18.657 11:33:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:18.657 11:33:46 -- common/autotest_common.sh@10 -- # set +x
00:07:18.657 ************************************
00:07:18.657 START TEST accel_rpc
00:07:18.657 ************************************
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:07:18.657 * Looking for test storage...
00:07:18.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:07:18.657 11:33:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:07:18.657 11:33:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1799980
00:07:18.657 11:33:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1799980
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1799980 ']'
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:18.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:18.657 11:33:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:18.657 11:33:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:07:18.918 [2024-07-15 11:33:46.762345] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
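Note on the accel_rpc suite starting here: it exercises the RPC surface rather than the data path. spdk_tgt comes up with --wait-for-rpc, so the accel framework is still uninitialized; the test assigns the copy opcode first to a deliberately bogus module and then to software, initializes the framework, and verifies the assignment stuck. Condensed from the trace that follows, using the rpc.py path from this workspace (the test itself goes through the rpc_cmd wrapper):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m incorrect   # accepted pre-init, even for a bogus module
    $RPC accel_assign_opc -o copy -m software    # reassign to a real module
    $RPC framework_start_init
    $RPC accel_get_opc_assignments | jq -r .copy | grep software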
00:07:18.918 [2024-07-15 11:33:46.762403] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799980 ] 00:07:18.918 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.918 [2024-07-15 11:33:46.832667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.918 [2024-07-15 11:33:46.907151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.486 11:33:47 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:19.486 11:33:47 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:19.486 11:33:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:19.486 11:33:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:19.486 11:33:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:19.487 11:33:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:19.487 11:33:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:19.487 11:33:47 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:19.487 11:33:47 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:19.487 11:33:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.487 ************************************ 00:07:19.487 START TEST accel_assign_opcode 00:07:19.487 ************************************ 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.487 [2024-07-15 11:33:47.561105] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.487 [2024-07-15 11:33:47.569118] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.487 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:19.746 
11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:19.746 software 00:07:19.746 00:07:19.746 real 0m0.234s 00:07:19.746 user 0m0.047s 00:07:19.746 sys 0m0.007s 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:19.746 11:33:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:19.746 ************************************ 00:07:19.746 END TEST accel_assign_opcode 00:07:19.746 ************************************ 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:19.746 11:33:47 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1799980 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1799980 ']' 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1799980 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:19.746 11:33:47 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1799980 00:07:20.005 11:33:47 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:20.005 11:33:47 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:20.005 11:33:47 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1799980' 00:07:20.005 killing process with pid 1799980 00:07:20.005 11:33:47 accel_rpc -- common/autotest_common.sh@967 -- # kill 1799980 00:07:20.005 11:33:47 accel_rpc -- common/autotest_common.sh@972 -- # wait 1799980 00:07:20.265 00:07:20.265 real 0m1.568s 00:07:20.265 user 0m1.585s 00:07:20.265 sys 0m0.465s 00:07:20.265 11:33:48 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.265 11:33:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.265 ************************************ 00:07:20.265 END TEST accel_rpc 00:07:20.265 ************************************ 00:07:20.265 11:33:48 -- common/autotest_common.sh@1142 -- # return 0 00:07:20.265 11:33:48 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.265 11:33:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:20.265 11:33:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.265 11:33:48 -- common/autotest_common.sh@10 -- # set +x 00:07:20.265 ************************************ 00:07:20.265 START TEST app_cmdline 00:07:20.265 ************************************ 00:07:20.265 11:33:48 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:20.524 * Looking for test storage... 
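Note on the kill sequence traced above ('[' -z ... ']', kill -0, uname, ps --no-headers -o comm=, kill, wait): this is the killprocess helper from autotest_common.sh. A simplified sketch of that logic follows; the upstream helper has more branches, and the reactor_0 = sudo comparison in the trace is its guard against signalling a sudo wrapper:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                    # refuse an empty pid
        kill -0 "$pid" || return 1                   # is it still alive?
        local name; name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1              # never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                   # signal, then reap the child
    }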
00:07:20.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:20.524 11:33:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:20.524 11:33:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1800313 00:07:20.524 11:33:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1800313 00:07:20.524 11:33:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1800313 ']' 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.524 11:33:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.524 [2024-07-15 11:33:48.435263] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:20.524 [2024-07-15 11:33:48.435314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800313 ] 00:07:20.524 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.524 [2024-07-15 11:33:48.505115] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.524 [2024-07-15 11:33:48.577514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:21.463 { 00:07:21.463 "version": "SPDK v24.09-pre git sha1 62a72093c", 00:07:21.463 "fields": { 00:07:21.463 "major": 24, 00:07:21.463 "minor": 9, 00:07:21.463 "patch": 0, 00:07:21.463 "suffix": "-pre", 00:07:21.463 "commit": "62a72093c" 00:07:21.463 } 00:07:21.463 } 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:21.463 11:33:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:21.463 11:33:49 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:21.723 request: 00:07:21.723 { 00:07:21.723 "method": "env_dpdk_get_mem_stats", 00:07:21.723 "req_id": 1 00:07:21.723 } 00:07:21.723 Got JSON-RPC error response 00:07:21.723 response: 00:07:21.723 { 00:07:21.723 "code": -32601, 00:07:21.723 "message": "Method not found" 00:07:21.723 } 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:21.723 11:33:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1800313 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1800313 ']' 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1800313 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1800313 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1800313' 00:07:21.723 killing process with pid 1800313 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@967 -- # kill 1800313 00:07:21.723 11:33:49 app_cmdline -- common/autotest_common.sh@972 -- # wait 1800313 00:07:21.982 00:07:21.982 real 0m1.694s 00:07:21.982 user 0m1.957s 00:07:21.982 sys 0m0.499s 00:07:21.982 11:33:49 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
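For reference: the -32601 "Method not found" above is the --rpcs-allowed filter working as intended, not a failure. A minimal sketch of the same behaviour by hand against any spdk_tgt build, with paths shortened relative to the spdk checkout:

    ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    ./scripts/rpc.py rpc_get_methods          # permitted: returns exactly the two allowed methods
    ./scripts/rpc.py spdk_get_version         # permitted: returns the version object printed above
    ./scripts/rpc.py env_dpdk_get_mem_stats   # filtered: fails with JSON-RPC -32601 'Method not found'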
00:07:21.982 11:33:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.982 ************************************ 00:07:21.982 END TEST app_cmdline 00:07:21.982 ************************************ 00:07:21.982 11:33:50 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.982 11:33:50 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:21.982 11:33:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.982 11:33:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.982 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:07:21.982 ************************************ 00:07:21.982 START TEST version 00:07:21.982 ************************************ 00:07:21.982 11:33:50 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:22.242 * Looking for test storage... 00:07:22.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:22.242 11:33:50 version -- app/version.sh@17 -- # get_header_version major 00:07:22.242 11:33:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # cut -f2 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.242 11:33:50 version -- app/version.sh@17 -- # major=24 00:07:22.242 11:33:50 version -- app/version.sh@18 -- # get_header_version minor 00:07:22.242 11:33:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # cut -f2 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.242 11:33:50 version -- app/version.sh@18 -- # minor=9 00:07:22.242 11:33:50 version -- app/version.sh@19 -- # get_header_version patch 00:07:22.242 11:33:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # cut -f2 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.242 11:33:50 version -- app/version.sh@19 -- # patch=0 00:07:22.242 11:33:50 version -- app/version.sh@20 -- # get_header_version suffix 00:07:22.242 11:33:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # cut -f2 00:07:22.242 11:33:50 version -- app/version.sh@14 -- # tr -d '"' 00:07:22.242 11:33:50 version -- app/version.sh@20 -- # suffix=-pre 00:07:22.242 11:33:50 version -- app/version.sh@22 -- # version=24.9 00:07:22.242 11:33:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:22.242 11:33:50 version -- app/version.sh@28 -- # version=24.9rc0 00:07:22.242 11:33:50 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:22.242 11:33:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:22.242 11:33:50 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:22.242 11:33:50 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:22.242 00:07:22.242 real 0m0.172s 00:07:22.242 user 0m0.088s 00:07:22.242 sys 0m0.129s 00:07:22.242 11:33:50 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.242 11:33:50 version -- common/autotest_common.sh@10 -- # set +x 00:07:22.242 ************************************ 00:07:22.242 END TEST version 00:07:22.242 ************************************ 00:07:22.242 11:33:50 -- common/autotest_common.sh@1142 -- # return 0 00:07:22.242 11:33:50 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@198 -- # uname -s 00:07:22.242 11:33:50 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:22.242 11:33:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:22.242 11:33:50 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:22.242 11:33:50 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:22.242 11:33:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:22.242 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:07:22.242 11:33:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:22.242 11:33:50 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:22.242 11:33:50 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.242 11:33:50 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.242 11:33:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.242 11:33:50 -- common/autotest_common.sh@10 -- # set +x 00:07:22.242 ************************************ 00:07:22.242 START TEST nvmf_tcp 00:07:22.242 ************************************ 00:07:22.242 11:33:50 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:22.502 * Looking for test storage... 00:07:22.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:22.502 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:22.502 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:22.502 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.503 11:33:50 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.503 11:33:50 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.503 11:33:50 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.503 11:33:50 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.503 11:33:50 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.503 11:33:50 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.503 11:33:50 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:22.503 11:33:50 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:22.503 11:33:50 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.503 11:33:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:22.503 11:33:50 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:22.503 11:33:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.503 11:33:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.503 11:33:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:22.503 ************************************ 00:07:22.503 START TEST nvmf_example 00:07:22.503 ************************************ 00:07:22.503 11:33:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:22.503 * Looking for test storage... 
00:07:22.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.503 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.503 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:22.763 11:33:50 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:29.401 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.401 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:29.402 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:29.402 Found net devices under 
0000:af:00.0: cvl_0_0 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:29.402 Found net devices under 0000:af:00.1: cvl_0_1 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:29.402 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:29.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:29.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:07:29.661 00:07:29.661 --- 10.0.0.2 ping statistics --- 00:07:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.661 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:29.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:29.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:07:29.661 00:07:29.661 --- 10.0.0.1 ping statistics --- 00:07:29.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:29.661 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1804098 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1804098 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1804098 ']' 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
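The two successful pings above confirm the topology nvmf_tcp_init just built: the E810 port cvl_0_0 (10.0.0.2, the target side) was moved into the cvl_0_0_ns_spdk namespace, while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace. Condensed from the trace above (device names are specific to this rig):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back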
00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.661 11:33:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.661 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:30.596 11:33:58 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:30.596 EAL: No free 2048 kB hugepages reported on node 1 
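Before the perf run whose output follows, the target was provisioned entirely over JSON-RPC; rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the sequence condenses to (rpc.py standing in for rpc_cmd):

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB in-capsule data
    rpc.py bdev_malloc_create 64 512                      # 64 MiB ramdisk, 512 B blocks -> Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
                                                          # QD 64, 4 KiB randrw, 30% reads, 10 s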
00:07:42.808 Initializing NVMe Controllers 00:07:42.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:42.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:42.808 Initialization complete. Launching workers. 00:07:42.808 ======================================================== 00:07:42.808 Latency(us) 00:07:42.808 Device Information : IOPS MiB/s Average min max 00:07:42.808 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16850.76 65.82 3797.96 678.57 15480.94 00:07:42.808 ======================================================== 00:07:42.808 Total : 16850.76 65.82 3797.96 678.57 15480.94 00:07:42.808 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:42.808 rmmod nvme_tcp 00:07:42.808 rmmod nvme_fabrics 00:07:42.808 rmmod nvme_keyring 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1804098 ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1804098 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1804098 ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1804098 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1804098 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1804098' 00:07:42.808 killing process with pid 1804098 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1804098 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1804098 00:07:42.808 nvmf threads initialize successfully 00:07:42.808 bdev subsystem init successfully 00:07:42.808 created a nvmf target service 00:07:42.808 create targets's poll groups done 00:07:42.808 all subsystems of target started 00:07:42.808 nvmf target is running 00:07:42.808 all subsystems of target stopped 00:07:42.808 destroy targets's poll groups done 00:07:42.808 destroyed the nvmf target service 00:07:42.808 bdev subsystem finish successfully 00:07:42.808 nvmf threads destroy successfully 00:07:42.808 11:34:08 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:42.808 11:34:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.067 00:07:43.067 real 0m20.612s 00:07:43.067 user 0m45.190s 00:07:43.067 sys 0m7.435s 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.067 11:34:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:43.067 ************************************ 00:07:43.067 END TEST nvmf_example 00:07:43.067 ************************************ 00:07:43.067 11:34:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:43.067 11:34:11 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:43.067 11:34:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:43.067 11:34:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.067 11:34:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:43.329 ************************************ 00:07:43.329 START TEST nvmf_filesystem 00:07:43.329 ************************************ 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:43.329 * Looking for test storage... 
00:07:43.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:43.329 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:43.330 11:34:11 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:43.330 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:43.330 #define SPDK_CONFIG_H 00:07:43.330 #define SPDK_CONFIG_APPS 1 00:07:43.330 #define SPDK_CONFIG_ARCH native 00:07:43.330 #undef SPDK_CONFIG_ASAN 00:07:43.330 #undef SPDK_CONFIG_AVAHI 00:07:43.330 #undef SPDK_CONFIG_CET 00:07:43.330 #define SPDK_CONFIG_COVERAGE 1 00:07:43.330 #define SPDK_CONFIG_CROSS_PREFIX 00:07:43.330 #undef SPDK_CONFIG_CRYPTO 00:07:43.330 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:43.330 #undef SPDK_CONFIG_CUSTOMOCF 00:07:43.330 #undef SPDK_CONFIG_DAOS 00:07:43.330 #define SPDK_CONFIG_DAOS_DIR 00:07:43.330 #define SPDK_CONFIG_DEBUG 1 00:07:43.330 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:43.330 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:43.330 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:43.330 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:43.330 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:43.330 #undef SPDK_CONFIG_DPDK_UADK 00:07:43.330 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:43.330 #define SPDK_CONFIG_EXAMPLES 1 00:07:43.330 #undef SPDK_CONFIG_FC 00:07:43.330 #define SPDK_CONFIG_FC_PATH 00:07:43.330 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:43.330 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:43.330 #undef SPDK_CONFIG_FUSE 00:07:43.330 #undef SPDK_CONFIG_FUZZER 00:07:43.330 #define SPDK_CONFIG_FUZZER_LIB 00:07:43.330 #undef SPDK_CONFIG_GOLANG 00:07:43.330 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:43.330 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:43.330 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:43.330 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:43.330 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:43.330 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:43.330 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:43.330 #define SPDK_CONFIG_IDXD 1 00:07:43.330 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:43.330 #undef SPDK_CONFIG_IPSEC_MB 00:07:43.330 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:43.330 #define SPDK_CONFIG_ISAL 1 00:07:43.330 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:43.330 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:43.330 #define SPDK_CONFIG_LIBDIR 00:07:43.330 #undef SPDK_CONFIG_LTO 00:07:43.330 #define SPDK_CONFIG_MAX_LCORES 128 00:07:43.330 #define SPDK_CONFIG_NVME_CUSE 1 00:07:43.330 #undef SPDK_CONFIG_OCF 00:07:43.330 #define SPDK_CONFIG_OCF_PATH 00:07:43.330 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:43.330 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:43.330 #define SPDK_CONFIG_PGO_DIR 00:07:43.330 #undef SPDK_CONFIG_PGO_USE 00:07:43.330 #define SPDK_CONFIG_PREFIX /usr/local 00:07:43.330 #undef SPDK_CONFIG_RAID5F 00:07:43.330 #undef SPDK_CONFIG_RBD 00:07:43.330 #define SPDK_CONFIG_RDMA 1 00:07:43.330 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:43.330 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:43.330 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:43.330 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:43.330 #define SPDK_CONFIG_SHARED 1 00:07:43.330 #undef SPDK_CONFIG_SMA 00:07:43.330 #define SPDK_CONFIG_TESTS 1 00:07:43.330 #undef SPDK_CONFIG_TSAN 00:07:43.330 #define SPDK_CONFIG_UBLK 1 00:07:43.330 #define SPDK_CONFIG_UBSAN 1 00:07:43.330 #undef SPDK_CONFIG_UNIT_TESTS 00:07:43.330 #undef SPDK_CONFIG_URING 00:07:43.330 #define SPDK_CONFIG_URING_PATH 00:07:43.330 #undef SPDK_CONFIG_URING_ZNS 00:07:43.330 #undef SPDK_CONFIG_USDT 00:07:43.331 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:43.331 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:43.331 #define SPDK_CONFIG_VFIO_USER 1 00:07:43.331 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:43.331 #define SPDK_CONFIG_VHOST 1 00:07:43.331 #define SPDK_CONFIG_VIRTIO 1 00:07:43.331 #undef SPDK_CONFIG_VTUNE 00:07:43.331 #define SPDK_CONFIG_VTUNE_DIR 00:07:43.331 #define SPDK_CONFIG_WERROR 1 00:07:43.331 #define SPDK_CONFIG_WPDK_DIR 00:07:43.331 #undef SPDK_CONFIG_XNVME 00:07:43.331 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:43.331 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:43.332 11:34:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:43.332 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
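The trace above finishes the sanitizer plumbing for the run: ASAN_OPTIONS and UBSAN_OPTIONS are exported, /var/tmp/asan_suppression_file is rebuilt from scratch, a leak:libfuse3.so entry is echoed into it, and LSAN_OPTIONS points LeakSanitizer at the result so known FUSE shutdown leaks do not fail the job. A minimal standalone sketch of that pattern (the option strings are copied from the trace; everything else is illustrative, not the exact SPDK helper):

    #!/usr/bin/env bash
    # Rebuild the LeakSanitizer suppression file on every run so stale
    # entries never linger between jobs.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"

    # One "leak:<pattern>" per line; libfuse3 is known to leak at exit.
    echo "leak:libfuse3.so" >> "$asan_suppression_file"

    # Wire the file into LSAN and keep the fail-fast sanitizer settings.
    export LSAN_OPTIONS=suppressions=$asan_suppression_file
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134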
00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j112 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1806585 ]] 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1806585 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.7Zv2p8 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7Zv2p8/tests/target /tmp/spdk.7Zv2p8 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=955215872 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4329213952 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55270850560 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61742325760 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6471475200 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:43.333 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30867787776 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871162880 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3375104 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12339081216 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12348465152 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9383936 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30870372352 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30871162880 00:07:43.593 11:34:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=790528 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6174228480 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6174232576 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:43.593 * Looking for test storage... 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55270850560 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:43.593 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8686067712 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:43.594 11:34:11 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
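The errtrace block traced just above is what produces the shape of every line in this log: extdebug plus an ERR trap dump a backtrace on failure, and PS4 is rebuilt so each xtrace line carries the wall-clock time (\t), the test domain (for example nvmf_tcp.nvmf_filesystem), and source@line ahead of the command. A trimmed reproduction of that harness, with a simplified stand-in for SPDK's print_backtrace helper:

    #!/usr/bin/env bash
    # Simplified stand-in for SPDK's print_backtrace (autotest_common.sh);
    # any function that walks FUNCNAME/BASH_SOURCE/BASH_LINENO works here.
    print_backtrace() {
        local i
        for (( i = 1; i < ${#FUNCNAME[@]}; i++ )); do
            echo "  at ${FUNCNAME[i]} (${BASH_SOURCE[i]}:${BASH_LINENO[i-1]})"
        done
    }

    set -o errtrace     # propagate the ERR trap into functions and subshells
    shopt -s extdebug   # keep the call-stack arrays populated for backtraces
    trap 'trap - ERR; print_backtrace >&2' ERR

    # PS4 is prompt-expanded: \t becomes HH:MM:SS, the parameter expansion
    # keeps only the last two path components of the sourcing script, and
    # \$ prints '#' under root -- giving "11:34:11 <domain> -- file@line -- # cmd".
    PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x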
00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:43.594 11:34:11 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:43.594 11:34:11 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:50.164 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:50.165 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:50.165 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.165 11:34:18 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:07:50.165 Found net devices under 0000:af:00.0: cvl_0_0
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:07:50.165 Found net devices under 0000:af:00.1: cvl_0_1
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
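The device-discovery loop that just completed maps each supported NIC from its PCI address to a kernel interface name by globbing sysfs, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. The same walk in isolation (the BDF list is hard-coded here for illustration; the script derives it from its PCI bus scan):

    #!/usr/bin/env bash
    # Resolve PCI functions to their kernel net interface names via sysfs.
    pci_devs=(0000:af:00.0 0000:af:00.1)   # illustrative, not auto-detected
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        # Every entry under .../net/ is an interface bound to this function.
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done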
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:50.165 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:50.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:50.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms
00:07:50.425
00:07:50.425 --- 10.0.0.2 ping statistics ---
00:07:50.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:50.425 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:50.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:50.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:07:50.425
00:07:50.425 --- 10.0.0.1 ping statistics ---
00:07:50.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:50.425 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:07:50.425 ************************************
00:07:50.425 START TEST nvmf_filesystem_no_in_capsule
00:07:50.425 ************************************
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- #
xtrace_disable 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1809706 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1809706 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1809706 ']' 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.425 11:34:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:50.425 [2024-07-15 11:34:18.519433] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:07:50.425 [2024-07-15 11:34:18.519475] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.683 [2024-07-15 11:34:18.595875] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.683 [2024-07-15 11:34:18.676819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.683 [2024-07-15 11:34:18.676860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.683 [2024-07-15 11:34:18.676871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.683 [2024-07-15 11:34:18.676880] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.684 [2024-07-15 11:34:18.676887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
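What the nvmfappstart/waitforlisten trace above boils down to: start nvmf_tgt inside the server-side network namespace, then poll its RPC socket until the app answers. A minimal sketch of that pattern, assuming the paths shown in this log (the fixed-iteration poll stands in for the harness's actual waitforlisten helper, whose exact retry policy is not shown here):

    # Launch the SPDK target inside the target namespace (flags as traced above).
    NS=cvl_0_0_ns_spdk
    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the RPC socket until the app is ready; 100 x 0.1s is an assumed
    # bound, not the harness's exact behavior.
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done

Once the socket answers, the pid is handed to waitforlisten and every later liveness check (the kill -0 probes below) reuses it.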
00:07:50.684 [2024-07-15 11:34:18.676942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.684 [2024-07-15 11:34:18.677038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.684 [2024-07-15 11:34:18.677136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.684 [2024-07-15 11:34:18.677137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.247 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.247 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:51.247 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.247 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.247 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 [2024-07-15 11:34:19.361785] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 Malloc1 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.505 [2024-07-15 11:34:19.511936] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:51.505 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:51.506 { 00:07:51.506 "name": "Malloc1", 00:07:51.506 "aliases": [ 00:07:51.506 "81694979-d7e8-4bab-84ae-e0c0a0d9568c" 00:07:51.506 ], 00:07:51.506 "product_name": "Malloc disk", 00:07:51.506 "block_size": 512, 00:07:51.506 "num_blocks": 1048576, 00:07:51.506 "uuid": "81694979-d7e8-4bab-84ae-e0c0a0d9568c", 00:07:51.506 "assigned_rate_limits": { 00:07:51.506 "rw_ios_per_sec": 0, 00:07:51.506 "rw_mbytes_per_sec": 0, 00:07:51.506 "r_mbytes_per_sec": 0, 00:07:51.506 "w_mbytes_per_sec": 0 00:07:51.506 }, 00:07:51.506 "claimed": true, 00:07:51.506 "claim_type": "exclusive_write", 00:07:51.506 "zoned": false, 00:07:51.506 "supported_io_types": { 00:07:51.506 "read": true, 00:07:51.506 "write": true, 00:07:51.506 "unmap": true, 00:07:51.506 "flush": true, 00:07:51.506 "reset": true, 00:07:51.506 "nvme_admin": false, 00:07:51.506 "nvme_io": false, 00:07:51.506 "nvme_io_md": false, 00:07:51.506 "write_zeroes": true, 00:07:51.506 "zcopy": true, 00:07:51.506 "get_zone_info": false, 00:07:51.506 "zone_management": false, 00:07:51.506 "zone_append": false, 00:07:51.506 "compare": false, 00:07:51.506 "compare_and_write": false, 00:07:51.506 "abort": true, 00:07:51.506 "seek_hole": false, 00:07:51.506 "seek_data": false, 00:07:51.506 "copy": true, 00:07:51.506 "nvme_iov_md": false 00:07:51.506 }, 00:07:51.506 "memory_domains": [ 00:07:51.506 { 
00:07:51.506 "dma_device_id": "system", 00:07:51.506 "dma_device_type": 1 00:07:51.506 }, 00:07:51.506 { 00:07:51.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.506 "dma_device_type": 2 00:07:51.506 } 00:07:51.506 ], 00:07:51.506 "driver_specific": {} 00:07:51.506 } 00:07:51.506 ]' 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:51.506 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:51.763 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:51.763 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:51.763 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:51.763 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:51.763 11:34:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:53.137 11:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:53.137 11:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:53.137 11:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:53.137 11:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:53.137 11:34:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:55.038 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:55.039 11:34:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:55.039 11:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:55.297 11:34:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.234 ************************************ 00:07:56.234 START TEST filesystem_ext4 00:07:56.234 ************************************ 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:56.234 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:56.234 11:34:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:07:56.234 mke2fs 1.46.5 (30-Dec-2021)
00:07:56.494 Discarding device blocks: 0/522240 done
00:07:56.494 Creating filesystem with 522240 1k blocks and 130560 inodes
00:07:56.494 Filesystem UUID: 4630e4ad-619c-4b43-93fa-e1066610cef3
00:07:56.494 Superblock backups stored on blocks:
00:07:56.494 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:07:56.494
00:07:56.494 Allocating group tables: 0/64 done
00:07:56.494 Writing inode tables: 0/64 done
00:07:56.494 Creating journal (8192 blocks): done
00:07:56.494 Writing superblocks and filesystem accounting information: 0/64 done
00:07:56.494
00:07:56.494 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0
00:07:56.494 11:34:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1809706
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:07:57.430
00:07:57.430 real 0m1.073s
00:07:57.430 user 0m0.027s
00:07:57.430 sys 0m0.081s
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:07:57.430 ************************************
00:07:57.430 END TEST filesystem_ext4
00:07:57.430 ************************************
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:57.430 11:34:25
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:57.430 ************************************ 00:07:57.430 START TEST filesystem_btrfs 00:07:57.430 ************************************ 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:57.430 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:57.689 btrfs-progs v6.6.2 00:07:57.689 See https://btrfs.readthedocs.io for more information. 00:07:57.689 00:07:57.689 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:57.689 NOTE: several default settings have changed in version 5.15, please make sure 00:07:57.689 this does not affect your deployments: 00:07:57.689 - DUP for metadata (-m dup) 00:07:57.689 - enabled no-holes (-O no-holes) 00:07:57.689 - enabled free-space-tree (-R free-space-tree) 00:07:57.689 00:07:57.689 Label: (null) 00:07:57.689 UUID: a11065d2-7c23-4f8b-a376-76bf60d3b5f3 00:07:57.689 Node size: 16384 00:07:57.689 Sector size: 4096 00:07:57.689 Filesystem size: 510.00MiB 00:07:57.689 Block group profiles: 00:07:57.689 Data: single 8.00MiB 00:07:57.689 Metadata: DUP 32.00MiB 00:07:57.689 System: DUP 8.00MiB 00:07:57.689 SSD detected: yes 00:07:57.689 Zoned device: no 00:07:57.689 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:57.689 Runtime features: free-space-tree 00:07:57.689 Checksum: crc32c 00:07:57.689 Number of devices: 1 00:07:57.689 Devices: 00:07:57.689 ID SIZE PATH 00:07:57.689 1 510.00MiB /dev/nvme0n1p1 00:07:57.689 00:07:57.689 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:57.689 11:34:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1809706 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.626 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.886 00:07:58.886 real 0m1.297s 00:07:58.886 user 0m0.036s 00:07:58.886 sys 0m0.135s 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.886 ************************************ 00:07:58.886 END TEST filesystem_btrfs 00:07:58.886 ************************************ 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:07:58.886 ************************************
00:07:58.886 START TEST filesystem_xfs
00:07:58.886 ************************************
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']'
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f
00:07:58.886 11:34:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1
00:07:58.886 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:07:58.886 = sectsz=512 attr=2, projid32bit=1
00:07:58.886 = crc=1 finobt=1, sparse=1, rmapbt=0
00:07:58.886 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:07:58.886 data = bsize=4096 blocks=130560, imaxpct=25
00:07:58.886 = sunit=0 swidth=0 blks
00:07:58.886 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:07:58.886 log =internal log bsize=4096 blocks=16384, version=2
00:07:58.886 = sectsz=512 sunit=0 blks, lazy-count=1
00:07:58.886 realtime =none extsz=4096 blocks=0, rtextents=0
00:07:59.836 Discarding blocks...Done.
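Each filesystem_* subtest follows the same template once mkfs succeeds: mount the partition, do a small write/delete cycle, unmount, then confirm both the target process and the exported namespace survived. A condensed sketch of the sequence traced next at target/filesystem.sh lines 23-43 (device names and the nvmf_tgt pid taken from this log; error handling omitted):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 "$nvmfpid"                        # nvmf_tgt must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still present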
00:07:59.836 11:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:59.836 11:34:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.432 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.432 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1809706 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.433 00:08:02.433 real 0m3.656s 00:08:02.433 user 0m0.029s 00:08:02.433 sys 0m0.084s 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.433 ************************************ 00:08:02.433 END TEST filesystem_xfs 00:08:02.433 ************************************ 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:02.433 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:02.692 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.952 11:34:30 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1809706 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1809706 ']' 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1809706 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1809706 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1809706' 00:08:02.952 killing process with pid 1809706 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1809706 00:08:02.952 11:34:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1809706 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:03.211 00:08:03.211 real 0m12.758s 00:08:03.211 user 0m49.728s 00:08:03.211 sys 0m1.768s 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.211 ************************************ 00:08:03.211 END TEST nvmf_filesystem_no_in_capsule 00:08:03.211 ************************************ 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.211 ************************************ 00:08:03.211 START TEST nvmf_filesystem_in_capsule 00:08:03.211 ************************************ 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.211 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1812168 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1812168 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1812168 ']' 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.471 11:34:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.471 [2024-07-15 11:34:31.371155] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:08:03.471 [2024-07-15 11:34:31.371198] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.471 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.471 [2024-07-15 11:34:31.445163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.471 [2024-07-15 11:34:31.515798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.471 [2024-07-15 11:34:31.515848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:03.471 [2024-07-15 11:34:31.515857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.471 [2024-07-15 11:34:31.515866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.472 [2024-07-15 11:34:31.515873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.472 [2024-07-15 11:34:31.515915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.472 [2024-07-15 11:34:31.516012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.472 [2024-07-15 11:34:31.516096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.472 [2024-07-15 11:34:31.516098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 [2024-07-15 11:34:32.218645] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 [2024-07-15 11:34:32.369583] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:04.410 { 00:08:04.410 "name": "Malloc1", 00:08:04.410 "aliases": [ 00:08:04.410 "7e3b7a7d-80e1-42f1-8562-3803c33a2d79" 00:08:04.410 ], 00:08:04.410 "product_name": "Malloc disk", 00:08:04.410 "block_size": 512, 00:08:04.410 "num_blocks": 1048576, 00:08:04.410 "uuid": "7e3b7a7d-80e1-42f1-8562-3803c33a2d79", 00:08:04.410 "assigned_rate_limits": { 00:08:04.410 "rw_ios_per_sec": 0, 00:08:04.410 "rw_mbytes_per_sec": 0, 00:08:04.410 "r_mbytes_per_sec": 0, 00:08:04.410 "w_mbytes_per_sec": 0 00:08:04.410 }, 00:08:04.410 "claimed": true, 00:08:04.410 "claim_type": "exclusive_write", 00:08:04.410 "zoned": false, 00:08:04.410 "supported_io_types": { 00:08:04.410 "read": true, 00:08:04.410 "write": true, 00:08:04.410 "unmap": true, 00:08:04.410 "flush": true, 00:08:04.410 "reset": true, 00:08:04.410 "nvme_admin": false, 00:08:04.410 "nvme_io": false, 00:08:04.410 "nvme_io_md": false, 00:08:04.410 "write_zeroes": true, 00:08:04.410 "zcopy": true, 00:08:04.410 "get_zone_info": false, 00:08:04.410 "zone_management": false, 00:08:04.410 
"zone_append": false, 00:08:04.410 "compare": false, 00:08:04.410 "compare_and_write": false, 00:08:04.410 "abort": true, 00:08:04.410 "seek_hole": false, 00:08:04.410 "seek_data": false, 00:08:04.410 "copy": true, 00:08:04.410 "nvme_iov_md": false 00:08:04.410 }, 00:08:04.410 "memory_domains": [ 00:08:04.410 { 00:08:04.410 "dma_device_id": "system", 00:08:04.410 "dma_device_type": 1 00:08:04.410 }, 00:08:04.410 { 00:08:04.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.410 "dma_device_type": 2 00:08:04.410 } 00:08:04.410 ], 00:08:04.410 "driver_specific": {} 00:08:04.410 } 00:08:04.410 ]' 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.410 11:34:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:05.788 11:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.788 11:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:05.788 11:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.788 11:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:05.788 11:34:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:08.323 11:34:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:08.323 11:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:08.891 11:34:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.270 ************************************ 00:08:10.270 START TEST filesystem_in_capsule_ext4 00:08:10.270 ************************************ 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:10.270 11:34:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:10.270 11:34:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:10.270 mke2fs 1.46.5 (30-Dec-2021) 00:08:10.270 Discarding device blocks: 0/522240 done 00:08:10.270 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:10.270 Filesystem UUID: eb6d4a76-2251-4359-9bcf-3895a90cbedc 00:08:10.270 Superblock backups stored on blocks: 00:08:10.270 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:10.270 00:08:10.270 Allocating group tables: 0/64 done 00:08:10.270 Writing inode tables: 0/64 done 00:08:10.270 Creating journal (8192 blocks): done 00:08:10.270 Writing superblocks and filesystem accounting information: 0/64 done 00:08:10.270 00:08:10.270 11:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:10.270 11:34:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1812168 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:11.208 00:08:11.208 real 0m1.155s 00:08:11.208 user 0m0.037s 00:08:11.208 sys 0m0.067s 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:11.208 ************************************ 00:08:11.208 END TEST filesystem_in_capsule_ext4 00:08:11.208 ************************************ 00:08:11.208 
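Each TEST block exercises the freshly made filesystem the same way; the filesystem.sh step numbers in the ext4 trace above map to this sequence (device, mountpoint, and pid as recorded in this run):

    mount /dev/nvme0n1p1 /mnt/device          # filesystem.sh@23
    touch /mnt/device/aaa                     # @24: create a file on the new fs
    sync                                      # @25
    rm /mnt/device/aaa                        # @26
    sync                                      # @27
    umount /mnt/device                        # @30
    kill -0 1812168                           # @37: target process still alive?
    lsblk -l -o NAME | grep -q -w nvme0n1     # @40: namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # @43: partition still visible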
11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:11.208 ************************************ 00:08:11.208 START TEST filesystem_in_capsule_btrfs 00:08:11.208 ************************************ 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:11.208 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:11.777 btrfs-progs v6.6.2 00:08:11.777 See https://btrfs.readthedocs.io for more information. 00:08:11.777 00:08:11.777 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:11.777 NOTE: several default settings have changed in version 5.15, please make sure 00:08:11.777 this does not affect your deployments: 00:08:11.777 - DUP for metadata (-m dup) 00:08:11.777 - enabled no-holes (-O no-holes) 00:08:11.777 - enabled free-space-tree (-R free-space-tree) 00:08:11.777 00:08:11.777 Label: (null) 00:08:11.777 UUID: eb3c77a1-1b8a-4a4a-b3cc-56b94974f662 00:08:11.777 Node size: 16384 00:08:11.777 Sector size: 4096 00:08:11.777 Filesystem size: 510.00MiB 00:08:11.777 Block group profiles: 00:08:11.777 Data: single 8.00MiB 00:08:11.777 Metadata: DUP 32.00MiB 00:08:11.777 System: DUP 8.00MiB 00:08:11.777 SSD detected: yes 00:08:11.777 Zoned device: no 00:08:11.777 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:11.777 Runtime features: free-space-tree 00:08:11.777 Checksum: crc32c 00:08:11.777 Number of devices: 1 00:08:11.777 Devices: 00:08:11.777 ID SIZE PATH 00:08:11.777 1 510.00MiB /dev/nvme0n1p1 00:08:11.777 00:08:11.777 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:11.777 11:34:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1812168 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.345 00:08:12.345 real 0m1.077s 00:08:12.345 user 0m0.040s 00:08:12.345 sys 0m0.144s 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.345 ************************************ 00:08:12.345 END TEST filesystem_in_capsule_btrfs 00:08:12.345 ************************************ 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.345 ************************************ 00:08:12.345 START TEST filesystem_in_capsule_xfs 00:08:12.345 ************************************ 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:12.345 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:12.346 11:34:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:12.605 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:12.605 = sectsz=512 attr=2, projid32bit=1 00:08:12.605 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:12.605 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:12.605 data = bsize=4096 blocks=130560, imaxpct=25 00:08:12.605 = sunit=0 swidth=0 blks 00:08:12.605 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:12.605 log =internal log bsize=4096 blocks=16384, version=2 00:08:12.605 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:12.605 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:13.541 Discarding blocks...Done. 
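With all three mkfs variants now traced, the only per-fstype branch make_filesystem takes is how "force" is spelled: ext4 wants -F, btrfs and xfs want -f. A minimal reconstruction of the helper from the autotest_common.sh lines visible in these traces (the i=0 counter in the trace suggests retry bookkeeping, which is elided here):

    # Sketch of common/autotest_common.sh's make_filesystem, reduced to the
    # branches actually exercised above.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F          # mkfs.ext4 -F /dev/nvme0n1p1
        else
            force=-f          # mkfs.btrfs -f / mkfs.xfs -f
        fi
        "mkfs.$fstype" "$force" "$dev_name"
    }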
00:08:13.541 11:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:13.541 11:34:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1812168 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.072 00:08:16.072 real 0m3.537s 00:08:16.072 user 0m0.037s 00:08:16.072 sys 0m0.077s 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:16.072 ************************************ 00:08:16.072 END TEST filesystem_in_capsule_xfs 00:08:16.072 ************************************ 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:16.072 11:34:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:16.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:16.072 11:34:44 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1812168 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1812168 ']' 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1812168 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:16.072 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1812168 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1812168' 00:08:16.331 killing process with pid 1812168 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1812168 00:08:16.331 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1812168 00:08:16.589 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:16.589 00:08:16.589 real 0m13.257s 00:08:16.589 user 0m51.752s 00:08:16.589 sys 0m1.817s 00:08:16.589 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.589 11:34:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:16.589 ************************************ 00:08:16.589 END TEST nvmf_filesystem_in_capsule 00:08:16.590 ************************************ 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.590 rmmod nvme_tcp 00:08:16.590 rmmod nvme_fabrics 00:08:16.590 rmmod nvme_keyring 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.590 11:34:44 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.120 11:34:46 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.120 00:08:19.120 real 0m35.576s 00:08:19.120 user 1m43.609s 00:08:19.120 sys 0m9.086s 00:08:19.120 11:34:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.120 11:34:46 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:19.120 ************************************ 00:08:19.120 END TEST nvmf_filesystem 00:08:19.120 ************************************ 00:08:19.120 11:34:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:19.121 11:34:46 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:19.121 11:34:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.121 11:34:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.121 11:34:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:19.121 ************************************ 00:08:19.121 START TEST nvmf_target_discovery 00:08:19.121 ************************************ 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:19.121 * Looking for test storage... 
00:08:19.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.121 11:34:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.690 11:34:53 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:25.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:25.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:25.690 Found net devices under 0000:af:00.0: cvl_0_0 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:25.690 Found net devices under 0000:af:00.1: cvl_0_1 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.690 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:25.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:08:25.691 00:08:25.691 --- 10.0.0.2 ping statistics --- 00:08:25.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.691 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:25.691 00:08:25.691 --- 10.0.0.1 ping statistics --- 00:08:25.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.691 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.691 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1818309 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1818309 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1818309 ']' 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:25.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.950 11:34:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:25.950 [2024-07-15 11:34:53.889797] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:08:25.950 [2024-07-15 11:34:53.889858] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.950 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.950 [2024-07-15 11:34:53.963048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.950 [2024-07-15 11:34:54.036936] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.950 [2024-07-15 11:34:54.036973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.950 [2024-07-15 11:34:54.036982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.950 [2024-07-15 11:34:54.036990] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.950 [2024-07-15 11:34:54.036997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.950 [2024-07-15 11:34:54.037056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.950 [2024-07-15 11:34:54.037153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.950 [2024-07-15 11:34:54.037214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.950 [2024-07-15 11:34:54.037216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 [2024-07-15 11:34:54.747644] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 Null1 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.890 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 [2024-07-15 11:34:54.799921] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 Null2 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:26.891 11:34:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 Null3 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 Null4 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.891 11:34:54 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:08:27.181 00:08:27.181 Discovery Log Number of Records 6, Generation counter 6 00:08:27.181 =====Discovery Log Entry 0====== 00:08:27.181 trtype: tcp 00:08:27.181 adrfam: ipv4 00:08:27.181 subtype: current discovery subsystem 00:08:27.181 treq: not required 00:08:27.181 portid: 0 00:08:27.181 trsvcid: 4420 00:08:27.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:27.181 traddr: 10.0.0.2 00:08:27.181 eflags: explicit discovery connections, duplicate discovery information 00:08:27.181 sectype: none 00:08:27.181 =====Discovery Log Entry 1====== 00:08:27.181 trtype: tcp 00:08:27.181 adrfam: ipv4 00:08:27.181 subtype: nvme subsystem 00:08:27.181 treq: not required 00:08:27.181 portid: 0 00:08:27.181 trsvcid: 4420 00:08:27.181 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:27.181 traddr: 10.0.0.2 00:08:27.181 eflags: none 00:08:27.181 sectype: none 00:08:27.181 =====Discovery Log Entry 2====== 00:08:27.181 trtype: tcp 00:08:27.181 adrfam: ipv4 00:08:27.181 subtype: nvme subsystem 00:08:27.181 treq: not required 00:08:27.181 portid: 0 00:08:27.181 trsvcid: 4420 00:08:27.181 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:27.182 traddr: 10.0.0.2 00:08:27.182 eflags: none 00:08:27.182 sectype: none 00:08:27.182 =====Discovery Log Entry 3====== 00:08:27.182 trtype: tcp 00:08:27.182 adrfam: ipv4 00:08:27.182 subtype: nvme subsystem 00:08:27.182 treq: not required 00:08:27.182 portid: 0 00:08:27.182 trsvcid: 4420 00:08:27.182 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:27.182 traddr: 10.0.0.2 00:08:27.182 eflags: none 00:08:27.182 sectype: none 00:08:27.182 =====Discovery Log Entry 4====== 00:08:27.182 trtype: tcp 00:08:27.182 adrfam: ipv4 00:08:27.182 subtype: nvme subsystem 00:08:27.182 treq: not required 
00:08:27.182 portid: 0 00:08:27.182 trsvcid: 4420 00:08:27.182 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:27.182 traddr: 10.0.0.2 00:08:27.182 eflags: none 00:08:27.182 sectype: none 00:08:27.182 =====Discovery Log Entry 5====== 00:08:27.182 trtype: tcp 00:08:27.182 adrfam: ipv4 00:08:27.182 subtype: discovery subsystem referral 00:08:27.182 treq: not required 00:08:27.182 portid: 0 00:08:27.182 trsvcid: 4430 00:08:27.182 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:27.182 traddr: 10.0.0.2 00:08:27.182 eflags: none 00:08:27.182 sectype: none 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:27.182 Perform nvmf subsystem discovery via RPC 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 [ 00:08:27.182 { 00:08:27.182 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:27.182 "subtype": "Discovery", 00:08:27.182 "listen_addresses": [ 00:08:27.182 { 00:08:27.182 "trtype": "TCP", 00:08:27.182 "adrfam": "IPv4", 00:08:27.182 "traddr": "10.0.0.2", 00:08:27.182 "trsvcid": "4420" 00:08:27.182 } 00:08:27.182 ], 00:08:27.182 "allow_any_host": true, 00:08:27.182 "hosts": [] 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.182 "subtype": "NVMe", 00:08:27.182 "listen_addresses": [ 00:08:27.182 { 00:08:27.182 "trtype": "TCP", 00:08:27.182 "adrfam": "IPv4", 00:08:27.182 "traddr": "10.0.0.2", 00:08:27.182 "trsvcid": "4420" 00:08:27.182 } 00:08:27.182 ], 00:08:27.182 "allow_any_host": true, 00:08:27.182 "hosts": [], 00:08:27.182 "serial_number": "SPDK00000000000001", 00:08:27.182 "model_number": "SPDK bdev Controller", 00:08:27.182 "max_namespaces": 32, 00:08:27.182 "min_cntlid": 1, 00:08:27.182 "max_cntlid": 65519, 00:08:27.182 "namespaces": [ 00:08:27.182 { 00:08:27.182 "nsid": 1, 00:08:27.182 "bdev_name": "Null1", 00:08:27.182 "name": "Null1", 00:08:27.182 "nguid": "33D3495CCFF348A681ACBDB15599AC23", 00:08:27.182 "uuid": "33d3495c-cff3-48a6-81ac-bdb15599ac23" 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:27.182 "subtype": "NVMe", 00:08:27.182 "listen_addresses": [ 00:08:27.182 { 00:08:27.182 "trtype": "TCP", 00:08:27.182 "adrfam": "IPv4", 00:08:27.182 "traddr": "10.0.0.2", 00:08:27.182 "trsvcid": "4420" 00:08:27.182 } 00:08:27.182 ], 00:08:27.182 "allow_any_host": true, 00:08:27.182 "hosts": [], 00:08:27.182 "serial_number": "SPDK00000000000002", 00:08:27.182 "model_number": "SPDK bdev Controller", 00:08:27.182 "max_namespaces": 32, 00:08:27.182 "min_cntlid": 1, 00:08:27.182 "max_cntlid": 65519, 00:08:27.182 "namespaces": [ 00:08:27.182 { 00:08:27.182 "nsid": 1, 00:08:27.182 "bdev_name": "Null2", 00:08:27.182 "name": "Null2", 00:08:27.182 "nguid": "4EAF9628B3D446D18B504570590D4F52", 00:08:27.182 "uuid": "4eaf9628-b3d4-46d1-8b50-4570590d4f52" 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:27.182 "subtype": "NVMe", 00:08:27.182 "listen_addresses": [ 00:08:27.182 { 00:08:27.182 "trtype": "TCP", 00:08:27.182 "adrfam": "IPv4", 00:08:27.182 "traddr": "10.0.0.2", 00:08:27.182 "trsvcid": "4420" 00:08:27.182 } 00:08:27.182 ], 00:08:27.182 "allow_any_host": true, 
00:08:27.182 "hosts": [], 00:08:27.182 "serial_number": "SPDK00000000000003", 00:08:27.182 "model_number": "SPDK bdev Controller", 00:08:27.182 "max_namespaces": 32, 00:08:27.182 "min_cntlid": 1, 00:08:27.182 "max_cntlid": 65519, 00:08:27.182 "namespaces": [ 00:08:27.182 { 00:08:27.182 "nsid": 1, 00:08:27.182 "bdev_name": "Null3", 00:08:27.182 "name": "Null3", 00:08:27.182 "nguid": "C97B91E81B694291902F043F3129F860", 00:08:27.182 "uuid": "c97b91e8-1b69-4291-902f-043f3129f860" 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 }, 00:08:27.182 { 00:08:27.182 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:27.182 "subtype": "NVMe", 00:08:27.182 "listen_addresses": [ 00:08:27.182 { 00:08:27.182 "trtype": "TCP", 00:08:27.182 "adrfam": "IPv4", 00:08:27.182 "traddr": "10.0.0.2", 00:08:27.182 "trsvcid": "4420" 00:08:27.182 } 00:08:27.182 ], 00:08:27.182 "allow_any_host": true, 00:08:27.182 "hosts": [], 00:08:27.182 "serial_number": "SPDK00000000000004", 00:08:27.182 "model_number": "SPDK bdev Controller", 00:08:27.182 "max_namespaces": 32, 00:08:27.182 "min_cntlid": 1, 00:08:27.182 "max_cntlid": 65519, 00:08:27.182 "namespaces": [ 00:08:27.182 { 00:08:27.182 "nsid": 1, 00:08:27.182 "bdev_name": "Null4", 00:08:27.182 "name": "Null4", 00:08:27.182 "nguid": "E84FFE06C1CB485B938D9D11D6CA4163", 00:08:27.182 "uuid": "e84ffe06-c1cb-485b-938d-9d11d6ca4163" 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 } 00:08:27.182 ] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.182 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:27.446 rmmod nvme_tcp 00:08:27.446 rmmod nvme_fabrics 00:08:27.446 rmmod nvme_keyring 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1818309 ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1818309 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1818309 ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1818309 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1818309 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1818309' 00:08:27.446 killing process with pid 1818309 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1818309 00:08:27.446 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1818309 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.705 11:34:55 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.608 11:34:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:29.608 00:08:29.608 real 0m10.852s 00:08:29.608 user 0m8.252s 00:08:29.608 sys 0m5.662s 00:08:29.608 11:34:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:29.608 11:34:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:29.608 ************************************ 00:08:29.608 END TEST nvmf_target_discovery 00:08:29.608 ************************************ 00:08:29.867 11:34:57 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:29.867 11:34:57 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.867 11:34:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:29.867 11:34:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.867 11:34:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:29.867 ************************************ 00:08:29.867 START TEST nvmf_referrals 00:08:29.867 ************************************ 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:29.867 * Looking for test storage... 00:08:29.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
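referrals.sh pins its test vectors right here: three referral addresses in the loopback range (127.0.0.2, 127.0.0.3, 127.0.0.4), with the referral port 4430 and the well-known discovery NQN defined immediately below. Nothing ever listens on those addresses; the test only checks that the discovery service advertises them. Once the target is up later in this trace, the first round of checks reduces to a sketch like the following (addresses and port from this run):

  # Register the three referrals against the discovery service...
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    rpc.py nvmf_discovery_add_referral -t tcp -a $ip -s 4430
  done
  # ...and read them back over both interfaces.
  rpc.py nvmf_discovery_get_referrals | jq length        # the test expects 3
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort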
00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:29.867 11:34:57 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.427 11:35:04 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:36.427 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:36.427 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.427 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.428 11:35:04 
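The two Found ... (0x8086 - 0x159b) lines above come from gather_supported_nvmf_pci_devs: with SPDK_TEST_NVMF_NICS=e810 it matches Intel E810 functions by PCI ID (0x1592/0x159b), skips the RDMA-only branches on a TCP run, and resolves each function to its kernel netdev through /sys/bus/pci/devices/$pci/net, which yields the cvl_0_0/cvl_0_1 names printed just below. common.sh builds its own PCI cache from sysfs; lspci is simply the quickest way to reproduce the same match by hand (a sketch, not the script's mechanism):

  # List E810 functions (ID 8086:159b, as matched in this run) and the
  # netdev each one backs.
  for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
  done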
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:36.428 Found net devices under 0000:af:00.0: cvl_0_0 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:36.428 Found net devices under 0000:af:00.1: cvl_0_1 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:36.428 11:35:04 
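nvmf_tcp_init then turns the two ports into a point-to-point test bed: the target-side port cvl_0_0 moves into a private network namespace and takes 10.0.0.2/24, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1/24, so the kernel initiator and the SPDK target cannot short-circuit through loopback. Including the link-up, firewall, and ping checks that follow below, the sequence is essentially this iproute2 sketch (interface and namespace names from this machine):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator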
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:36.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:36.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:08:36.428 00:08:36.428 --- 10.0.0.2 ping statistics --- 00:08:36.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.428 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:36.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:08:36.428 00:08:36.428 --- 10.0.0.1 ping statistics --- 00:08:36.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.428 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1822294 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1822294 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1822294 ']' 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:36.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.428 11:35:04 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.428 [2024-07-15 11:35:04.494014] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:08:36.428 [2024-07-15 11:35:04.494063] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.428 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.686 [2024-07-15 11:35:04.566836] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.686 [2024-07-15 11:35:04.635846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.686 [2024-07-15 11:35:04.635893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.687 [2024-07-15 11:35:04.635902] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.687 [2024-07-15 11:35:04.635910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.687 [2024-07-15 11:35:04.635917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.687 [2024-07-15 11:35:04.636009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.687 [2024-07-15 11:35:04.636123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.687 [2024-07-15 11:35:04.636191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.687 [2024-07-15 11:35:04.636193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.253 [2024-07-15 11:35:05.345713] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.253 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.511 [2024-07-15 11:35:05.361947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:37.511 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.512 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.770 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:38.029 11:35:05 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.029 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:38.287 11:35:06 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:38.287 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.288 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:38.546 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:38.547 11:35:06 
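The checks around referrals.sh lines 60-74 pin down referral subtypes: a referral added with -n discovery is advertised as another discovery-subsystem entry, while one added with -n nqn.2016-06.io.spdk:cnode1 must surface in the discovery log as an nvme subsystem record, which is why get_referral_ips reported the same address twice (127.0.0.2 127.0.0.2) before the cnode1 referral was removed again. The paired get_discovery_entries/jq probes select each kind by .subtype; the same split can be checked by hand with a sketch like this (host NQN/ID flags dropped for brevity; the run passes the generated ones shown above):

  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  # before the removal at line 71 this prints: nqn.2016-06.io.spdk:cnode1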
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.547 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:38.804 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:39.082 
11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.082 rmmod nvme_tcp 00:08:39.082 rmmod nvme_fabrics 00:08:39.082 rmmod nvme_keyring 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1822294 ']' 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1822294 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1822294 ']' 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1822294 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:39.082 11:35:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1822294 00:08:39.082 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:39.083 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:39.083 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1822294' 00:08:39.083 killing process with pid 1822294 00:08:39.083 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1822294 00:08:39.083 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1822294 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:39.341 11:35:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.245 11:35:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:41.245 00:08:41.245 real 0m11.551s 00:08:41.245 user 0m12.803s 00:08:41.245 sys 0m5.801s 00:08:41.245 11:35:09 
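nvmftestfini, traced just above, mirrors the earlier setup: unload the host-side NVMe modules (the rmmod lines are modprobe -r narrating its work), kill the target by pid, drop the network namespace, and flush the initiator address, leaving the machine clean for the next test. In outline (a sketch; the namespace removal is assumed to happen inside _remove_spdk_ns, whose trace is silenced above):

  modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill $nvmfpid && wait $nvmfpid     # killprocess first checks the pid is reactor_0
  ip netns delete cvl_0_0_ns_spdk    # assumed: performed by _remove_spdk_ns
  ip -4 addr flush cvl_0_1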
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:41.245 11:35:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:41.245 ************************************ 00:08:41.245 END TEST nvmf_referrals 00:08:41.245 ************************************ 00:08:41.505 11:35:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:41.505 11:35:09 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.505 11:35:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:41.505 11:35:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.505 11:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:41.505 ************************************ 00:08:41.505 START TEST nvmf_connect_disconnect 00:08:41.505 ************************************ 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:41.505 * Looking for test storage... 00:08:41.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.505 11:35:09 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:41.505 11:35:09 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:49.628 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:49.628 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.628 11:35:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:49.628 Found net devices under 0000:af:00.0: cvl_0_0 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:49.628 Found net devices under 0000:af:00.1: cvl_0_1 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.628 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:08:49.629 00:08:49.629 --- 10.0.0.2 ping statistics --- 00:08:49.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.629 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:08:49.629 00:08:49.629 --- 10.0.0.1 ping statistics --- 00:08:49.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.629 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1826553 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1826553 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1826553 ']' 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.629 11:35:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 [2024-07-15 11:35:16.610117] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
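[Editor's sketch] The nvmfappstart/waitforlisten pair traced above reduces to launching nvmf_tgt inside the target namespace and polling its RPC socket (/var/tmp/spdk.sock, per the "Waiting for process..." message) until it answers. A minimal sketch, assuming the workspace path shown in this log; using rpc_get_methods as the readiness probe is this sketch's choice, not necessarily what waitforlisten itself does:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the RPC socket; give up if the target dies during startup.
  until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done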
00:08:49.629 [2024-07-15 11:35:16.610160] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.629 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.629 [2024-07-15 11:35:16.684760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.629 [2024-07-15 11:35:16.757493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.629 [2024-07-15 11:35:16.757534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.629 [2024-07-15 11:35:16.757543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.629 [2024-07-15 11:35:16.757551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.629 [2024-07-15 11:35:16.757558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.629 [2024-07-15 11:35:16.757687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.629 [2024-07-15 11:35:16.757783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.629 [2024-07-15 11:35:16.757877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.629 [2024-07-15 11:35:16.757881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 [2024-07-15 11:35:17.464727] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.629 11:35:17 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.629 [2024-07-15 11:35:17.519532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:49.629 11:35:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:52.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:06.998 rmmod nvme_tcp 00:09:06.998 rmmod nvme_fabrics 00:09:06.998 rmmod nvme_keyring 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1826553 ']' 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1826553 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1826553 ']' 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1826553 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1826553 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1826553' 00:09:06.998 killing process with pid 1826553 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1826553 00:09:06.998 11:35:34 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1826553 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.257 11:35:35 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.160 11:35:37 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.160 00:09:09.160 real 0m27.797s 00:09:09.160 user 1m14.227s 00:09:09.160 sys 0m7.393s 00:09:09.160 11:35:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.160 11:35:37 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.160 ************************************ 00:09:09.160 END TEST nvmf_connect_disconnect 00:09:09.160 ************************************ 00:09:09.160 11:35:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.160 11:35:37 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:09.160 11:35:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.160 11:35:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.160 11:35:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.419 ************************************ 00:09:09.419 START TEST nvmf_multitarget 00:09:09.419 ************************************ 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:09.419 * Looking for test storage... 
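[Editor's sketch] Condensed for readability, the RPC sequence that assembled the target for the connect/disconnect test above is: create the TCP transport, back it with a malloc bdev, create the subsystem, then attach namespace and listener. rpc_cmd in the trace is the harness wrapper; invoking scripts/rpc.py directly, as below, is an assumed equivalent:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
  bdev=$($rpc bdev_malloc_create 64 512)          # 64 MiB, 512 B blocks; printed "Malloc0" above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each of the five "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines in the trace is one connect/disconnect iteration against this subsystem (num_iterations=5).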
00:09:09.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:09.419 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
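[Editor's note] A side observation on the export.sh trace above, not something the harness itself does: each re-source prepends the same go/protoc/golangci directories, so PATH grows with every test suite. If that ever needs taming, an order-preserving dedup is one line of awk:

  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
  export PATH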
00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:09.420 11:35:37 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:15.984 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:15.985 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:15.985 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:15.985 Found net devices under 0000:af:00.0: cvl_0_0 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
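[Editor's sketch] The pci_bus_cache lookups above classify ports by PCI vendor:device pair (0x8086:0x159b and 0x8086:0x1592 are the E810 entries) and then read the bound net device names from sysfs. A hedged stand-in that uses lspci instead of the harness's prebuilt cache:

  # Collect E810 ports (swap in 8086:1592 for the other id the trace checks).
  mapfile -t e810 < <(lspci -Dn -d 8086:159b | awk '{print $1}')
  for pci in "${e810[@]}"; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done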
00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:15.985 Found net devices under 0000:af:00.1: cvl_0_1 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:15.985 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:16.243 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:16.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:16.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:09:16.501 00:09:16.501 --- 10.0.0.2 ping statistics --- 00:09:16.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.501 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:09:16.501 00:09:16.501 --- 10.0.0.1 ping statistics --- 00:09:16.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.501 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1833496 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1833496 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1833496 ']' 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:16.501 11:35:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:16.501 [2024-07-15 11:35:44.492251] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
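[Editor's note] The ip/iptables sequence repeated for each suite above builds the same two-port topology: one E810 port stays in the root namespace as the initiator (10.0.0.1), its sibling moves into cvl_0_0_ns_spdk as the target (10.0.0.2), and the two pings prove the data path before the target app starts. Collected in one place, with the interface names from this log:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator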
00:09:16.501 [2024-07-15 11:35:44.492294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:16.501 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.501 [2024-07-15 11:35:44.566004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.759 [2024-07-15 11:35:44.637120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:16.759 [2024-07-15 11:35:44.637160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:16.759 [2024-07-15 11:35:44.637169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:16.759 [2024-07-15 11:35:44.637177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:16.759 [2024-07-15 11:35:44.637184] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:16.759 [2024-07-15 11:35:44.637236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.759 [2024-07-15 11:35:44.637331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.760 [2024-07-15 11:35:44.637414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.760 [2024-07-15 11:35:44.637415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:17.325 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:17.583 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:17.583 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:17.583 "nvmf_tgt_1" 00:09:17.583 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:17.583 "nvmf_tgt_2" 00:09:17.583 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:17.583 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:17.842 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:17.842 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:17.842 true 00:09:17.842 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:17.842 true 00:09:18.101 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:18.101 11:35:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:18.101 rmmod nvme_tcp 00:09:18.101 rmmod nvme_fabrics 00:09:18.101 rmmod nvme_keyring 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1833496 ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1833496 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1833496 ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1833496 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1833496 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1833496' 00:09:18.101 killing process with pid 1833496 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1833496 00:09:18.101 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1833496 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:18.360 11:35:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.898 11:35:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:20.898 00:09:20.898 real 0m11.165s 00:09:20.898 user 0m9.489s 00:09:20.898 sys 0m5.888s 00:09:20.898 11:35:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:20.898 11:35:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 ************************************ 00:09:20.898 END TEST nvmf_multitarget 00:09:20.898 ************************************ 00:09:20.898 11:35:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:20.898 11:35:48 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:20.898 11:35:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:20.898 11:35:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.898 11:35:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:20.898 ************************************ 00:09:20.898 START TEST nvmf_rpc 00:09:20.898 ************************************ 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:20.898 * Looking for test storage... 
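For reference, the nvmf_multitarget run that just finished reduces to a short RPC sequence against the running target; a condensed sketch using the same multitarget_rpc.py helper and jq length checks seen in the trace above (paths as in this workspace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]     # default + the two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]     # back to the default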
00:09:20.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:20.898 11:35:48 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
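The host identity used by every nvme connect in this test comes from nvme-cli itself: nvmf/common.sh generates the NQN once and reuses its UUID suffix as the host ID, as the sourcing trace above shows. A minimal sketch of that derivation, assuming nvme-cli is installed (the suffix-stripping step is an illustration, not necessarily the exact expansion common.sh uses):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # the bare UUID doubles as the host ID
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420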
00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:27.464 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:27.465 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:27.465 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:27.465 Found net devices under 0000:af:00.0: cvl_0_0 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:27.465 Found net devices under 0000:af:00.1: cvl_0_1 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:27.465 11:35:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:27.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:27.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:09:27.465 00:09:27.465 --- 10.0.0.2 ping statistics --- 00:09:27.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.465 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:27.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:27.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:09:27.465 00:09:27.465 --- 10.0.0.1 ping statistics --- 00:09:27.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:27.465 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1837488 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1837488 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1837488 ']' 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.465 11:35:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:27.465 [2024-07-15 11:35:55.210511] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:09:27.465 [2024-07-15 11:35:55.210558] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.465 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.465 [2024-07-15 11:35:55.283934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.465 [2024-07-15 11:35:55.352561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.465 [2024-07-15 11:35:55.352606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
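All of that connectivity rests on the namespace plumbing nvmf_tcp_init performed just above: one port of the e810 pair (cvl_0_0) is moved into a namespace to act as the target, while the other (cvl_0_1) stays in the root namespace as the initiator. Condensed from the commands in this trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target check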
00:09:27.465 [2024-07-15 11:35:55.352615] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.465 [2024-07-15 11:35:55.352623] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.465 [2024-07-15 11:35:55.352629] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.465 [2024-07-15 11:35:55.352689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.465 [2024-07-15 11:35:55.352787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.465 [2024-07-15 11:35:55.352853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.465 [2024-07-15 11:35:55.352855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:28.034 "tick_rate": 2500000000, 00:09:28.034 "poll_groups": [ 00:09:28.034 { 00:09:28.034 "name": "nvmf_tgt_poll_group_000", 00:09:28.034 "admin_qpairs": 0, 00:09:28.034 "io_qpairs": 0, 00:09:28.034 "current_admin_qpairs": 0, 00:09:28.034 "current_io_qpairs": 0, 00:09:28.034 "pending_bdev_io": 0, 00:09:28.034 "completed_nvme_io": 0, 00:09:28.034 "transports": [] 00:09:28.034 }, 00:09:28.034 { 00:09:28.034 "name": "nvmf_tgt_poll_group_001", 00:09:28.034 "admin_qpairs": 0, 00:09:28.034 "io_qpairs": 0, 00:09:28.034 "current_admin_qpairs": 0, 00:09:28.034 "current_io_qpairs": 0, 00:09:28.034 "pending_bdev_io": 0, 00:09:28.034 "completed_nvme_io": 0, 00:09:28.034 "transports": [] 00:09:28.034 }, 00:09:28.034 { 00:09:28.034 "name": "nvmf_tgt_poll_group_002", 00:09:28.034 "admin_qpairs": 0, 00:09:28.034 "io_qpairs": 0, 00:09:28.034 "current_admin_qpairs": 0, 00:09:28.034 "current_io_qpairs": 0, 00:09:28.034 "pending_bdev_io": 0, 00:09:28.034 "completed_nvme_io": 0, 00:09:28.034 "transports": [] 00:09:28.034 }, 00:09:28.034 { 00:09:28.034 "name": "nvmf_tgt_poll_group_003", 00:09:28.034 "admin_qpairs": 0, 00:09:28.034 "io_qpairs": 0, 00:09:28.034 "current_admin_qpairs": 0, 00:09:28.034 "current_io_qpairs": 0, 00:09:28.034 "pending_bdev_io": 0, 00:09:28.034 "completed_nvme_io": 0, 00:09:28.034 "transports": [] 00:09:28.034 } 00:09:28.034 ] 00:09:28.034 }' 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:28.034 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 [2024-07-15 11:35:56.174082] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:28.294 "tick_rate": 2500000000, 00:09:28.294 "poll_groups": [ 00:09:28.294 { 00:09:28.294 "name": "nvmf_tgt_poll_group_000", 00:09:28.294 "admin_qpairs": 0, 00:09:28.294 "io_qpairs": 0, 00:09:28.294 "current_admin_qpairs": 0, 00:09:28.294 "current_io_qpairs": 0, 00:09:28.294 "pending_bdev_io": 0, 00:09:28.294 "completed_nvme_io": 0, 00:09:28.294 "transports": [ 00:09:28.294 { 00:09:28.294 "trtype": "TCP" 00:09:28.294 } 00:09:28.294 ] 00:09:28.294 }, 00:09:28.294 { 00:09:28.294 "name": "nvmf_tgt_poll_group_001", 00:09:28.294 "admin_qpairs": 0, 00:09:28.294 "io_qpairs": 0, 00:09:28.294 "current_admin_qpairs": 0, 00:09:28.294 "current_io_qpairs": 0, 00:09:28.294 "pending_bdev_io": 0, 00:09:28.294 "completed_nvme_io": 0, 00:09:28.294 "transports": [ 00:09:28.294 { 00:09:28.294 "trtype": "TCP" 00:09:28.294 } 00:09:28.294 ] 00:09:28.294 }, 00:09:28.294 { 00:09:28.294 "name": "nvmf_tgt_poll_group_002", 00:09:28.294 "admin_qpairs": 0, 00:09:28.294 "io_qpairs": 0, 00:09:28.294 "current_admin_qpairs": 0, 00:09:28.294 "current_io_qpairs": 0, 00:09:28.294 "pending_bdev_io": 0, 00:09:28.294 "completed_nvme_io": 0, 00:09:28.294 "transports": [ 00:09:28.294 { 00:09:28.294 "trtype": "TCP" 00:09:28.294 } 00:09:28.294 ] 00:09:28.294 }, 00:09:28.294 { 00:09:28.294 "name": "nvmf_tgt_poll_group_003", 00:09:28.294 "admin_qpairs": 0, 00:09:28.294 "io_qpairs": 0, 00:09:28.294 "current_admin_qpairs": 0, 00:09:28.294 "current_io_qpairs": 0, 00:09:28.294 "pending_bdev_io": 0, 00:09:28.294 "completed_nvme_io": 0, 00:09:28.294 "transports": [ 00:09:28.294 { 00:09:28.294 "trtype": "TCP" 00:09:28.294 } 00:09:28.294 ] 00:09:28.294 } 00:09:28.294 ] 00:09:28.294 }' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
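The jcount and jsum checks being traced here are small wrappers defined in rpc.sh; reconstructed sketches matching the pipelines visible in this log (bodies simplified, operating on the $stats JSON captured by the preceding nvmf_get_stats call):

    jcount() {
        local filter=$1
        # count how many values the jq filter yields
        jq "$filter" <<<"$stats" | wc -l
    }
    jsum() {
        local filter=$1
        # sum the numeric values the jq filter yields
        jq "$filter" <<<"$stats" | awk '{s+=$1} END {print s}'
    }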
00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 Malloc1 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.294 [2024-07-15 11:35:56.353194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:28.294 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.2 -s 4420 00:09:28.294 [2024-07-15 11:35:56.381923] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:09:28.553 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:28.553 could not add new controller: failed to write to nvme-fabrics device 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.554 11:35:56 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.932 11:35:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:29.932 11:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.932 11:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.932 11:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:29.932 11:35:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.919 11:35:59 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:31.919 11:35:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.919 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:31.920 11:35:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:31.920 [2024-07-15 11:35:59.980999] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e' 00:09:31.920 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:31.920 could not add new controller: failed to write to nvme-fabrics device 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.920 11:36:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.298 11:36:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.298 11:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.298 11:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.298 11:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.298 11:36:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:35.834 11:36:03 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.834 [2024-07-15 11:36:03.530244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.834 11:36:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:36.769 11:36:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:36.769 11:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:36.769 11:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:36.769 11:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:36.769 11:36:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 [2024-07-15 11:36:07.015109] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.303 11:36:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:40.680 11:36:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:40.680 11:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:40.680 11:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:40.680 11:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:40.680 11:36:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 [2024-07-15 11:36:10.553532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.580 11:36:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:43.956 11:36:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:43.956 11:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:43.956 11:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:43.956 11:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:43.956 11:36:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:45.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:45.860 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 [2024-07-15 11:36:13.997330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.118 11:36:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.118 11:36:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.494 11:36:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:47.494 11:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:47.494 11:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:47.494 11:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:47.494 11:36:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:49.399 
11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:49.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:49.399 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 [2024-07-15 11:36:17.539013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:49.658 11:36:17 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.658 11:36:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:51.034 11:36:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:51.034 11:36:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.034 11:36:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.034 11:36:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:51.034 11:36:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:52.936 11:36:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:52.936 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 [2024-07-15 11:36:21.084098] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.196 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 [2024-07-15 11:36:21.132206] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 [2024-07-15 11:36:21.184344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
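The five passes traced in this stretch (target/rpc.sh@99-107) all drive the same RPC sequence; condensed into a standalone sketch, with $rpc bound to this run's rpc.py path and the loop count taken from the seq 1 5 above (the variable bindings are illustrative, not the exact rpc.sh source):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    loops=5
    for i in $(seq 1 $loops); do
        # Build the subsystem up (all verbs and arguments as in the trace)...
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        # ...then tear it straight back down; unlike the rpc.sh@81-94 loop
        # earlier, no host connect happens between build-up and teardown.
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done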
00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 [2024-07-15 11:36:21.232521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
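The connect/disconnect cycles earlier in this trace gate on a polling helper; a minimal reconstruction of waitforserial from the xtrace lines (common/autotest_common.sh@1198-1208 -- an approximation of the visible behavior, not the exact helper source; the empty [[ -n '' ]] branch is elided):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2    # give the kernel time to surface the block device
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }

waitforserial_disconnect (@1219-1231) is the mirror image: it loops until lsblk -l -o NAME,SERIAL | grep -q -w "$serial" stops matching, which is why each disconnect above ends with a return 0 at @1231.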
00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 [2024-07-15 11:36:21.280710] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.197 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:53.457 "tick_rate": 2500000000, 00:09:53.457 "poll_groups": [ 00:09:53.457 { 00:09:53.457 "name": "nvmf_tgt_poll_group_000", 00:09:53.457 "admin_qpairs": 2, 00:09:53.457 "io_qpairs": 196, 00:09:53.457 "current_admin_qpairs": 0, 00:09:53.457 "current_io_qpairs": 0, 00:09:53.457 "pending_bdev_io": 0, 00:09:53.457 "completed_nvme_io": 309, 00:09:53.457 "transports": [ 00:09:53.457 { 00:09:53.457 "trtype": "TCP" 00:09:53.457 } 00:09:53.457 ] 00:09:53.457 }, 00:09:53.457 { 00:09:53.457 "name": "nvmf_tgt_poll_group_001", 00:09:53.457 "admin_qpairs": 2, 00:09:53.457 "io_qpairs": 196, 00:09:53.457 "current_admin_qpairs": 0, 00:09:53.457 "current_io_qpairs": 0, 00:09:53.457 "pending_bdev_io": 0, 00:09:53.457 "completed_nvme_io": 236, 00:09:53.457 "transports": [ 00:09:53.457 { 00:09:53.457 "trtype": "TCP" 00:09:53.457 } 00:09:53.457 ] 00:09:53.457 }, 00:09:53.457 { 
00:09:53.457 "name": "nvmf_tgt_poll_group_002", 00:09:53.457 "admin_qpairs": 1, 00:09:53.457 "io_qpairs": 196, 00:09:53.457 "current_admin_qpairs": 0, 00:09:53.457 "current_io_qpairs": 0, 00:09:53.457 "pending_bdev_io": 0, 00:09:53.457 "completed_nvme_io": 295, 00:09:53.457 "transports": [ 00:09:53.457 { 00:09:53.457 "trtype": "TCP" 00:09:53.457 } 00:09:53.457 ] 00:09:53.457 }, 00:09:53.457 { 00:09:53.457 "name": "nvmf_tgt_poll_group_003", 00:09:53.457 "admin_qpairs": 2, 00:09:53.457 "io_qpairs": 196, 00:09:53.457 "current_admin_qpairs": 0, 00:09:53.457 "current_io_qpairs": 0, 00:09:53.457 "pending_bdev_io": 0, 00:09:53.457 "completed_nvme_io": 294, 00:09:53.457 "transports": [ 00:09:53.457 { 00:09:53.457 "trtype": "TCP" 00:09:53.457 } 00:09:53.457 ] 00:09:53.457 } 00:09:53.457 ] 00:09:53.457 }' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 784 > 0 )) 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:53.457 rmmod nvme_tcp 00:09:53.457 rmmod nvme_fabrics 00:09:53.457 rmmod nvme_keyring 00:09:53.457 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1837488 ']' 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1837488 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1837488 ']' 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1837488 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:53.458 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1837488 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1837488' 00:09:53.717 killing process with pid 1837488 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1837488 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1837488 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.717 11:36:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.275 11:36:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:56.275 00:09:56.275 real 0m35.332s 00:09:56.275 user 1m46.149s 00:09:56.275 sys 0m7.984s 00:09:56.275 11:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.275 11:36:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:56.275 ************************************ 00:09:56.275 END TEST nvmf_rpc 00:09:56.275 ************************************ 00:09:56.275 11:36:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:56.275 11:36:23 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:56.275 11:36:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:56.275 11:36:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.275 11:36:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:56.275 ************************************ 00:09:56.275 START TEST nvmf_invalid 00:09:56.275 ************************************ 00:09:56.275 11:36:23 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:09:56.275 * Looking for test storage... 
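For reference, the jsum checks that closed out nvmf_rpc above (target/rpc.sh@19-20) are a jq-plus-awk fold over the nvmf_get_stats JSON; a sketch, assuming $stats holds the JSON captured at rpc.sh@110:

    jsum() {
        local filter=$1
        # Emit one number per poll group, then sum them in awk.
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # Matching the totals asserted in the trace: 2+2+1+2 admin qpairs
    # and 4 x 196 io qpairs across the four poll groups.
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 784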
00:09:56.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.275 11:36:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:02.846 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:02.846 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:02.846 Found net devices under 0000:af:00.0: cvl_0_0 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:02.846 Found net devices under 0000:af:00.1: cvl_0_1 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:02.846 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:02.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:10:02.847 00:10:02.847 --- 10.0.0.2 ping statistics --- 00:10:02.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.847 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:02.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:10:02.847 00:10:02.847 --- 10.0.0.1 ping statistics --- 00:10:02.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.847 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1846268 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1846268 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1846268 ']' 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.847 11:36:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:02.847 [2024-07-15 11:36:30.773098] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
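Before the nvmf_tgt launch logged just above, nvmf_tcp_init (nvmf/common.sh@229-268) moved one e810 port into a private namespace and left the other as the initiator side; the sequence from this trace, condensed (commands as traced, comments added):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # and the reverse ping from inside the namespace

Both pings came back in about 0.2 ms, so the 10.0.0.2:4420 listeners the tests create are reachable from the initiator interface.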
00:10:02.847 [2024-07-15 11:36:30.773148] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.847 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.847 [2024-07-15 11:36:30.849783] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.847 [2024-07-15 11:36:30.924774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.847 [2024-07-15 11:36:30.924812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.847 [2024-07-15 11:36:30.924822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.847 [2024-07-15 11:36:30.924830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.847 [2024-07-15 11:36:30.924851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.847 [2024-07-15 11:36:30.924896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.847 [2024-07-15 11:36:30.924990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.847 [2024-07-15 11:36:30.925075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.847 [2024-07-15 11:36:30.925077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32489 00:10:03.781 [2024-07-15 11:36:31.776168] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:03.781 { 00:10:03.781 "nqn": "nqn.2016-06.io.spdk:cnode32489", 00:10:03.781 "tgt_name": "foobar", 00:10:03.781 "method": "nvmf_create_subsystem", 00:10:03.781 "req_id": 1 00:10:03.781 } 00:10:03.781 Got JSON-RPC error response 00:10:03.781 response: 00:10:03.781 { 00:10:03.781 "code": -32603, 00:10:03.781 "message": "Unable to find target foobar" 00:10:03.781 }' 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:03.781 { 00:10:03.781 "nqn": "nqn.2016-06.io.spdk:cnode32489", 00:10:03.781 "tgt_name": "foobar", 00:10:03.781 "method": "nvmf_create_subsystem", 00:10:03.781 "req_id": 1 00:10:03.781 } 00:10:03.781 Got JSON-RPC error response 00:10:03.781 response: 00:10:03.781 { 00:10:03.781 "code": -32603, 00:10:03.781 "message": "Unable to find target foobar" 
00:10:03.781 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:03.781 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19733 00:10:04.040 [2024-07-15 11:36:31.956828] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19733: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:04.040 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:04.040 { 00:10:04.040 "nqn": "nqn.2016-06.io.spdk:cnode19733", 00:10:04.040 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:04.040 "method": "nvmf_create_subsystem", 00:10:04.040 "req_id": 1 00:10:04.040 } 00:10:04.040 Got JSON-RPC error response 00:10:04.040 response: 00:10:04.040 { 00:10:04.040 "code": -32602, 00:10:04.040 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:04.040 }' 00:10:04.040 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:04.040 { 00:10:04.040 "nqn": "nqn.2016-06.io.spdk:cnode19733", 00:10:04.040 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:04.040 "method": "nvmf_create_subsystem", 00:10:04.040 "req_id": 1 00:10:04.040 } 00:10:04.040 Got JSON-RPC error response 00:10:04.040 response: 00:10:04.040 { 00:10:04.040 "code": -32602, 00:10:04.040 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:04.040 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:04.040 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:04.040 11:36:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24749 00:10:04.300 [2024-07-15 11:36:32.149416] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24749: invalid model number 'SPDK_Controller' 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:04.300 { 00:10:04.300 "nqn": "nqn.2016-06.io.spdk:cnode24749", 00:10:04.300 "model_number": "SPDK_Controller\u001f", 00:10:04.300 "method": "nvmf_create_subsystem", 00:10:04.300 "req_id": 1 00:10:04.300 } 00:10:04.300 Got JSON-RPC error response 00:10:04.300 response: 00:10:04.300 { 00:10:04.300 "code": -32602, 00:10:04.300 "message": "Invalid MN SPDK_Controller\u001f" 00:10:04.300 }' 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:04.300 { 00:10:04.300 "nqn": "nqn.2016-06.io.spdk:cnode24749", 00:10:04.300 "model_number": "SPDK_Controller\u001f", 00:10:04.300 "method": "nvmf_create_subsystem", 00:10:04.300 "req_id": 1 00:10:04.300 } 00:10:04.300 Got JSON-RPC error response 00:10:04.300 response: 00:10:04.300 { 00:10:04.300 "code": -32602, 00:10:04.300 "message": "Invalid MN SPDK_Controller\u001f" 00:10:04.300 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:10:04.300 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
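The character-by-character trace running through this stretch is gen_random_s from target/invalid.sh assembling a random serial or model number: test ll < length, printf %x the chosen code, expand it with echo -e, append it to string. A condensed sketch of that loop follows; the chars array matches the trace, while the RANDOM indexing and the '-'-prefix handling are assumptions about steps the xtrace output does not show.

# hedged reconstruction of the traced helper
gen_random_s() {
    local length=$1 ll
    # ASCII codes 32..127: printable characters plus DEL, per the chars=(...) trace
    local chars=($(seq 32 127)) string=
    for (( ll = 0; ll < length; ll++ )); do
        # pick a code, render it as hex, expand it, append the character
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # the [[ v == \- ]] check in the trace guards strings that begin with '-'
    # so they can still be passed safely as an rpc.py argument
    echo "$string"
}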
00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vn>tt$lFm>npL&h*<-'\''QB' 00:10:04.301 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vn>tt$lFm>npL&h*<-'\''QB' nqn.2016-06.io.spdk:cnode11827 00:10:04.560 [2024-07-15 11:36:32.498573] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11827: invalid serial number 'vn>tt$lFm>npL&h*<-'QB' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:04.560 { 00:10:04.560 "nqn": "nqn.2016-06.io.spdk:cnode11827", 00:10:04.560 "serial_number": "vn>tt$lFm>npL&h*<-'\''QB", 00:10:04.560 "method": "nvmf_create_subsystem", 00:10:04.560 "req_id": 1 00:10:04.560 } 00:10:04.560 Got JSON-RPC error response 00:10:04.560 
response: 00:10:04.560 { 00:10:04.560 "code": -32602, 00:10:04.560 "message": "Invalid SN vn>tt$lFm>npL&h*<-'\''QB" 00:10:04.560 }' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:04.560 { 00:10:04.560 "nqn": "nqn.2016-06.io.spdk:cnode11827", 00:10:04.560 "serial_number": "vn>tt$lFm>npL&h*<-'QB", 00:10:04.560 "method": "nvmf_create_subsystem", 00:10:04.560 "req_id": 1 00:10:04.560 } 00:10:04.560 Got JSON-RPC error response 00:10:04.560 response: 00:10:04.560 { 00:10:04.560 "code": -32602, 00:10:04.560 "message": "Invalid SN vn>tt$lFm>npL&h*<-'QB" 00:10:04.560 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:04.560 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 126 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7a' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.561 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=']' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.821 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ']|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP' 00:10:04.822 11:36:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP' nqn.2016-06.io.spdk:cnode29620 00:10:05.080 [2024-07-15 11:36:32.996385] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29620: invalid model number ']|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP' 00:10:05.080 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:05.080 { 00:10:05.080 "nqn": "nqn.2016-06.io.spdk:cnode29620", 00:10:05.080 "model_number": "\u007f]|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP", 00:10:05.080 "method": "nvmf_create_subsystem", 00:10:05.080 "req_id": 1 00:10:05.080 } 00:10:05.080 Got JSON-RPC error response 00:10:05.080 response: 00:10:05.080 { 00:10:05.080 "code": -32602, 00:10:05.080 "message": "Invalid MN \u007f]|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP" 00:10:05.081 }' 00:10:05.081 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:05.081 { 00:10:05.081 "nqn": "nqn.2016-06.io.spdk:cnode29620", 00:10:05.081 "model_number": "\u007f]|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP", 00:10:05.081 "method": "nvmf_create_subsystem", 00:10:05.081 "req_id": 1 00:10:05.081 } 00:10:05.081 Got JSON-RPC error response 00:10:05.081 response: 00:10:05.081 { 00:10:05.081 "code": -32602, 00:10:05.081 "message": "Invalid MN \u007f]|0~A[$|msKz)0ZiV4F]_!FhB)r[HAQfInxo)udP" 00:10:05.081 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:05.081 11:36:33 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:10:05.081 [2024-07-15 11:36:33.181060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:10:05.339 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:10:05.597 [2024-07-15 11:36:33.562320] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:10:05.597 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:10:05.597 {
00:10:05.597 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:05.597 "listen_address": {
00:10:05.597 "trtype": "tcp",
00:10:05.597 "traddr": "",
00:10:05.597 "trsvcid": "4421"
00:10:05.597 },
00:10:05.597 "method": "nvmf_subsystem_remove_listener",
00:10:05.597 "req_id": 1
00:10:05.597 }
00:10:05.597 Got JSON-RPC error response
00:10:05.597 response:
00:10:05.597 {
00:10:05.597 "code": -32602,
00:10:05.597 "message": "Invalid parameters"
00:10:05.597 }'
00:10:05.597 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:10:05.597 {
00:10:05.597 "nqn": "nqn.2016-06.io.spdk:cnode",
00:10:05.597 "listen_address": {
00:10:05.597 "trtype": "tcp",
00:10:05.597 "traddr": "",
00:10:05.597 "trsvcid": "4421"
00:10:05.597 },
00:10:05.597 "method": "nvmf_subsystem_remove_listener",
00:10:05.597 "req_id": 1
00:10:05.597 }
00:10:05.597 Got JSON-RPC error response
00:10:05.597 response:
00:10:05.597 {
00:10:05.597 "code": -32602,
00:10:05.597 "message": "Invalid parameters"
00:10:05.597 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:10:05.597 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8773 -i 0
00:10:05.855 [2024-07-15 11:36:33.754918] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8773: invalid cntlid range [0-65519]
00:10:05.855 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:10:05.855 {
00:10:05.855 "nqn": "nqn.2016-06.io.spdk:cnode8773",
00:10:05.855 "min_cntlid": 0,
00:10:05.855 "method": "nvmf_create_subsystem",
00:10:05.855 "req_id": 1
00:10:05.855 }
00:10:05.855 Got JSON-RPC error response
00:10:05.855 response:
00:10:05.855 {
00:10:05.855 "code": -32602,
00:10:05.855 "message": "Invalid cntlid range [0-65519]"
00:10:05.855 }'
00:10:05.855 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:10:05.855 {
00:10:05.855 "nqn": "nqn.2016-06.io.spdk:cnode8773",
00:10:05.855 "min_cntlid": 0,
00:10:05.855 "method": "nvmf_create_subsystem",
00:10:05.855 "req_id": 1
00:10:05.855 }
00:10:05.855 Got JSON-RPC error response
00:10:05.855 response:
00:10:05.855 {
00:10:05.855 "code": -32602,
00:10:05.855 "message": "Invalid cntlid range [0-65519]"
00:10:05.855 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:05.855 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28084 -i 65520
00:10:05.855 [2024-07-15 11:36:33.947613] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28084: invalid cntlid range [65520-65519]
00:10:06.114 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:10:06.114 {
00:10:06.114 "nqn": "nqn.2016-06.io.spdk:cnode28084",
00:10:06.114 "min_cntlid": 65520,
00:10:06.114 "method": "nvmf_create_subsystem",
00:10:06.114 "req_id": 1
00:10:06.114 }
00:10:06.114 Got JSON-RPC error response
00:10:06.114 response:
00:10:06.114 {
00:10:06.114 "code": -32602,
00:10:06.114 "message": "Invalid cntlid range [65520-65519]"
00:10:06.114 }'
00:10:06.114 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:10:06.114 {
00:10:06.114 "nqn": "nqn.2016-06.io.spdk:cnode28084",
00:10:06.114 "min_cntlid": 65520,
00:10:06.114 "method": "nvmf_create_subsystem",
00:10:06.114 "req_id": 1
00:10:06.114 }
00:10:06.114 Got JSON-RPC error response
00:10:06.114 response:
00:10:06.114 {
00:10:06.114 "code": -32602,
00:10:06.114 "message": "Invalid cntlid range [65520-65519]"
00:10:06.114 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:06.114 11:36:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7494 -I 0
00:10:06.114 [2024-07-15 11:36:34.136183] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7494: invalid cntlid range [1-0]
00:10:06.114 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:10:06.114 {
00:10:06.114 "nqn": "nqn.2016-06.io.spdk:cnode7494",
00:10:06.114 "max_cntlid": 0,
00:10:06.114 "method": "nvmf_create_subsystem",
00:10:06.114 "req_id": 1
00:10:06.114 }
00:10:06.114 Got JSON-RPC error response
00:10:06.114 response:
00:10:06.114 {
00:10:06.114 "code": -32602,
00:10:06.114 "message": "Invalid cntlid range [1-0]"
00:10:06.114 }'
00:10:06.114 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:10:06.114 {
00:10:06.114 "nqn": "nqn.2016-06.io.spdk:cnode7494",
00:10:06.114 "max_cntlid": 0,
00:10:06.114 "method": "nvmf_create_subsystem",
00:10:06.114 "req_id": 1
00:10:06.114 }
00:10:06.114 Got JSON-RPC error response
00:10:06.114 response:
00:10:06.114 {
00:10:06.114 "code": -32602,
00:10:06.114 "message": "Invalid cntlid range [1-0]"
00:10:06.114 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:06.114 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21846 -I 65520
00:10:06.399 [2024-07-15 11:36:34.320784] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21846: invalid cntlid range [1-65520]
00:10:06.399 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:10:06.399 {
00:10:06.399 "nqn": "nqn.2016-06.io.spdk:cnode21846",
00:10:06.399 "max_cntlid": 65520,
00:10:06.399 "method": "nvmf_create_subsystem",
00:10:06.399 "req_id": 1
00:10:06.399 }
00:10:06.399 Got JSON-RPC error response
00:10:06.399 response:
00:10:06.399 {
00:10:06.399 "code": -32602,
00:10:06.399 "message": "Invalid cntlid range [1-65520]"
00:10:06.399 }'
00:10:06.399 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:10:06.399 {
00:10:06.399 "nqn": "nqn.2016-06.io.spdk:cnode21846",
00:10:06.399 "max_cntlid": 65520,
00:10:06.399 "method": "nvmf_create_subsystem",
00:10:06.399 "req_id": 1
00:10:06.399 }
00:10:06.399 Got JSON-RPC error response
00:10:06.399 response:
00:10:06.399 {
00:10:06.399 "code": -32602,
00:10:06.399 "message": "Invalid cntlid range [1-65520]"
00:10:06.399 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:06.399 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5650 -i 6 -I 5
00:10:06.658 [2024-07-15 11:36:34.505403] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5650: invalid cntlid range [6-5]
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:10:06.658 {
00:10:06.658 "nqn": "nqn.2016-06.io.spdk:cnode5650",
00:10:06.658 "min_cntlid": 6,
00:10:06.658 "max_cntlid": 5,
00:10:06.658 "method": "nvmf_create_subsystem",
00:10:06.658 "req_id": 1
00:10:06.658 }
00:10:06.658 Got JSON-RPC error response
00:10:06.658 response:
00:10:06.658 {
00:10:06.658 "code": -32602,
00:10:06.658 "message": "Invalid cntlid range [6-5]"
00:10:06.658 }'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:10:06.658 {
00:10:06.658 "nqn": "nqn.2016-06.io.spdk:cnode5650",
00:10:06.658 "min_cntlid": 6,
00:10:06.658 "max_cntlid": 5,
00:10:06.658 "method": "nvmf_create_subsystem",
00:10:06.658 "req_id": 1
00:10:06.658 }
00:10:06.658 Got JSON-RPC error response
00:10:06.658 response:
00:10:06.658 {
00:10:06.658 "code": -32602,
00:10:06.658 "message": "Invalid cntlid range [6-5]"
00:10:06.658 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:10:06.658 {
00:10:06.658 "name": "foobar",
00:10:06.658 "method": "nvmf_delete_target",
00:10:06.658 "req_id": 1
00:10:06.658 }
00:10:06.658 Got JSON-RPC error response
00:10:06.658 response:
00:10:06.658 {
00:10:06.658 "code": -32602,
00:10:06.658 "message": "The specified target doesn'\''t exist, cannot delete it."
00:10:06.658 }'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:10:06.658 {
00:10:06.658 "name": "foobar",
00:10:06.658 "method": "nvmf_delete_target",
00:10:06.658 "req_id": 1
00:10:06.658 }
00:10:06.658 Got JSON-RPC error response
00:10:06.658 response:
00:10:06.658 {
00:10:06.658 "code": -32602,
00:10:06.658 "message": "The specified target doesn't exist, cannot delete it."
00:10:06.658 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:06.658 rmmod nvme_tcp
00:10:06.658 rmmod nvme_fabrics
00:10:06.658 rmmod nvme_keyring
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1846268 ']'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1846268
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1846268 ']'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1846268
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:06.658 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1846268
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1846268'
00:10:06.917 killing process with pid 1846268
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1846268
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1846268
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:10:06.917 11:36:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:09.450 11:36:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:09.450
00:10:09.450 real 0m13.105s
00:10:09.450 user 0m20.080s
00:10:09.450 sys 0m6.243s
00:10:09.450 11:36:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:09.450 11:36:37
nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:09.450 ************************************ 00:10:09.450 END TEST nvmf_invalid 00:10:09.450 ************************************ 00:10:09.450 11:36:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:09.450 11:36:37 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:09.450 11:36:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.450 11:36:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.450 11:36:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:09.450 ************************************ 00:10:09.450 START TEST nvmf_abort 00:10:09.450 ************************************ 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:09.450 * Looking for test storage... 00:10:09.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.450 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.451 11:36:37 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.451 11:36:37 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.023 
11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:16.023 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:16.023 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:16.023 Found net devices under 0000:af:00.0: cvl_0_0 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:16.023 Found net devices under 0000:af:00.1: cvl_0_1 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.023 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.024 11:36:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:16.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:10:16.024 00:10:16.024 --- 10.0.0.2 ping statistics --- 00:10:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.024 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:16.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:10:16.024 00:10:16.024 --- 10.0.0.1 ping statistics --- 00:10:16.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.024 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1850791 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1850791 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1850791 ']' 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:16.024 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:16.024 [2024-07-15 11:36:44.122571] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
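For readers reconstructing the topology: the nvmf_tcp_init sequence traced above reduces to the standalone sketch below. The interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24 addressing are taken from this run; substitute your own NIC ports.

#!/usr/bin/env bash
# Sketch of the namespace topology nvmf_tcp_init builds above (run as root).
TGT_IF=cvl_0_0                # target-side port, moved into its own netns
INI_IF=cvl_0_1                # initiator-side port, left in the root netns
NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # isolate the target port
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # NVMF_FIRST_TARGET_IP
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                          # root ns reaches the target IP
ip netns exec "$NS" ping -c 1 10.0.0.1      # and the namespace reaches back

Splitting the two ports of one physical NIC across a network namespace boundary is what lets a single host act as both NVMe-oF target and initiator over real wire traffic, which the two pings verify before any test starts.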
00:10:16.024 [2024-07-15 11:36:44.122628] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.283 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.283 [2024-07-15 11:36:44.199680] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:16.283 [2024-07-15 11:36:44.273398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.283 [2024-07-15 11:36:44.273434] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.283 [2024-07-15 11:36:44.273444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.283 [2024-07-15 11:36:44.273452] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.283 [2024-07-15 11:36:44.273459] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.283 [2024-07-15 11:36:44.273565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:16.283 [2024-07-15 11:36:44.273652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:16.283 [2024-07-15 11:36:44.273654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.851 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:16.851 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:16.851 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.851 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:16.851 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 [2024-07-15 11:36:44.980817] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:44 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 Malloc0 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 Delay0 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 [2024-07-15 11:36:45.064226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.111 11:36:45 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:17.111 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.111 [2024-07-15 11:36:45.172237] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:19.649 Initializing NVMe Controllers 00:10:19.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:19.649 controller IO queue size 128 less than required 00:10:19.649 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:19.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:19.649 Initialization complete. Launching workers. 
00:10:19.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 42146 00:10:19.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42207, failed to submit 62 00:10:19.649 success 42150, unsuccess 57, failed 0 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:19.649 rmmod nvme_tcp 00:10:19.649 rmmod nvme_fabrics 00:10:19.649 rmmod nvme_keyring 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1850791 ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1850791 ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1850791' 00:10:19.649 killing process with pid 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1850791 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:19.649 11:36:47 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.186 11:36:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:22.186 00:10:22.186 real 0m12.647s 00:10:22.186 user 0m13.527s 00:10:22.186 sys 0m6.471s 00:10:22.186 11:36:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.186 11:36:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 ************************************ 00:10:22.186 END TEST nvmf_abort 00:10:22.186 ************************************ 00:10:22.186 11:36:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:22.186 11:36:49 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:22.186 11:36:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.186 11:36:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.186 11:36:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:22.186 ************************************ 00:10:22.186 START TEST nvmf_ns_hotplug_stress 00:10:22.186 ************************************ 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:22.186 * Looking for test storage... 00:10:22.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.186 11:36:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.186 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:22.187 11:36:49 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:22.187 11:36:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.782 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.782 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:28.782 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:28.782 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:28.782 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:28.783 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:28.783 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.783 11:36:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:28.783 Found net devices under 0000:af:00.0: cvl_0_0 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:28.783 Found net devices under 0000:af:00.1: cvl_0_1 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.783 11:36:56 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:28.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:10:28.783 00:10:28.783 --- 10.0.0.2 ping statistics --- 00:10:28.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.783 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:10:28.783 00:10:28.783 --- 10.0.0.1 ping statistics --- 00:10:28.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.783 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1855153 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1855153 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1855153 ']' 00:10:28.783 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.784 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:28.784 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.784 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:28.784 11:36:56 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:28.784 [2024-07-15 11:36:56.789509] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
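The nvmfappstart/waitforlisten pair traced here comes down to roughly the sketch below. The polling loop is a simplified stand-in for waitforlisten in autotest_common.sh, not a verbatim copy; the binary path, namespace, and flags are the ones from this job.

#!/usr/bin/env bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Launch the target inside the test namespace with full tracepoints (-e 0xFFFF)
# on cores 1-3 (-m 0xE, matching the three reactors reported above), then wait
# for its RPC socket to answer before driving it with rpc.py.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done

Polling rpc_get_methods (a real, side-effect-free SPDK RPC) is a convenient readiness probe: once it succeeds, the target is listening on /var/tmp/spdk.sock and the transport/subsystem setup calls that follow in the log can proceed.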
00:10:28.784 [2024-07-15 11:36:56.789556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.784 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.784 [2024-07-15 11:36:56.864238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:29.044 [2024-07-15 11:36:56.937848] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:29.044 [2024-07-15 11:36:56.937907] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:29.045 [2024-07-15 11:36:56.937920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:29.045 [2024-07-15 11:36:56.937928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:29.045 [2024-07-15 11:36:56.937935] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:29.045 [2024-07-15 11:36:56.938038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:29.045 [2024-07-15 11:36:56.938124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:29.045 [2024-07-15 11:36:56.938126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:29.613 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:29.872 [2024-07-15 11:36:57.794543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:29.872 11:36:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:30.131 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:30.131 [2024-07-15 11:36:58.168230] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:30.131 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:30.391 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:30.650 Malloc0 00:10:30.650 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:30.650 Delay0 00:10:30.650 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.908 11:36:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:31.167 NULL1 00:10:31.167 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:31.167 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1855574 00:10:31.167 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:31.167 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:31.167 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.426 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.426 Read completed with error (sct=0, sc=11) 00:10:31.426 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.426 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.685 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.685 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:31.685 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:31.944 true 00:10:31.944 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:31.944 11:36:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.882 11:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.882 11:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:32.882 11:37:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:32.882 true 00:10:33.141 
11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:33.141 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.141 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.400 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:33.400 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:33.658 true 00:10:33.658 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:33.658 11:37:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 11:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:35.036 11:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:35.036 11:37:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:35.036 true 00:10:35.036 11:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:35.036 11:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:35.982 11:37:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.240 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:36.240 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:36.240 true 00:10:36.240 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:36.240 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.499 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:36.758 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:36.758 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:36.758 true 00:10:37.017 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:37.017 11:37:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:37.955 11:37:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:37.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.214 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:38.214 11:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:38.214 11:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:38.473 true 00:10:38.473 11:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:38.473 11:37:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.436 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.436 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:39.436 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:39.695 true 00:10:39.695 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:39.695 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.695 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.954 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:39.954 11:37:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:40.213 true 00:10:40.213 11:37:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:40.213 11:37:08 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 11:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:41.591 11:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:41.591 11:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:41.591 true 00:10:41.591 11:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:41.591 11:37:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.528 11:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.785 11:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:42.785 11:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:42.785 true 00:10:42.785 11:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:42.785 11:37:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.043 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.300 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:43.300 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:43.300 true 00:10:43.300 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:43.300 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.559 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:43.559 Message suppressed 999 times: Read 
00:10:43.559 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:43.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:43.839 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:43.839 [2024-07-15 11:37:11.747716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[the ctrlr_bdev.c:309 read error above repeats back-to-back several hundred times, identical except for its wall-clock stamp (11:37:11.747716 through 11:37:11.772007, elapsed 00:10:43.839-00:10:43.843), with one other record interleaved:]
00:10:43.841 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.843 [2024-07-15 11:37:11.772949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.772999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 
[2024-07-15 11:37:11.773681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.773980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774507] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.774973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775401] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775448] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775935] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.775977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776226] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.776979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 
[2024-07-15 11:37:11.777631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.777963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.778961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.779002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.779043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.779090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.779132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.844 [2024-07-15 11:37:11.779173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.779823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.780403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.845 [2024-07-15 11:37:11.780447] ctrlr_bdev.c: 
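The flood above is one validation failing in a tight loop: nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309 in this build) rejects any read whose transfer length, NLB times the namespace block size, exceeds what the request's SGL can receive - here 1 * 512 > 1. A minimal standalone sketch of that check, with variable names and the surrounding harness assumed for illustration rather than copied from the SPDK tree:

/* Sketch of the length check behind the repeated ctrlr_bdev.c:309 error.
 * Names and the failure handling are assumptions, not the actual SPDK code. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool
read_len_valid(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	/* Completing a read of num_blocks * block_size bytes into an SGL that
	 * can only receive sgl_length bytes would overrun the host buffer, so
	 * the target fails the command instead of submitting it to the bdev. */
	if (num_blocks * block_size > sgl_length) {
		fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The values from this log: 1 * 512 > 1, so the read is rejected. */
	return read_len_valid(1, 512, 1) ? 0 : 1;
}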
00:10:43.845 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:10:43.845 11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
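For reference, this appears to be the stress the test intends rather than a malfunction: ns_hotplug_stress.sh line 50 resizes the null bdev NULL1 via the bdev_null_resize RPC (to 1013, in MiB if it follows the null bdev's usual size units - an assumption) while readers keep I/O in flight, so bursts of rejected reads around each resize, like those collapsed above and below, are what this stress loop exercises.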
00:10:43.845 [2024-07-15 11:37:11.780485 - 11:37:11.794456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [identical message repeated continuously; duplicates collapsed]
00:10:43.847 [2024-07-15 11:37:11.794503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.794976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795102] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.847 [2024-07-15 11:37:11.795300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795582] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795901] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.795977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.796982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 
[2024-07-15 11:37:11.797157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.797991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.798982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799770] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.799954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 
[2024-07-15 11:37:11.800918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.800971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.848 [2024-07-15 11:37:11.801373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.801998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.802970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803570] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.803985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 
[2024-07-15 11:37:11.804685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.804965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805013] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805403] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.805735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.806991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807381] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.849 [2024-07-15 11:37:11.807429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.807966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 
[2024-07-15 11:37:11.808638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.808986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.809994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.810995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.850 [2024-07-15 11:37:11.811276] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:43.850 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:43.850 [log elided: the same "ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1" line repeats several hundred times, timestamps 2024-07-15 11:37:11.811316 through 11:37:11.839336] 00:10:43.855
[2024-07-15 11:37:11.839387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.839970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840835] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840923] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.840965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.841006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.841047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.841079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.841121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.855 [2024-07-15 11:37:11.841162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841710] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.841926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.842959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 
[2024-07-15 11:37:11.843417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.843970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.844976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.845993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846134] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846272] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.846959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 
[2024-07-15 11:37:11.847221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.856 [2024-07-15 11:37:11.847622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847750] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847847] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.847991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.848529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849348] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849914] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.849993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.850977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 
[2024-07-15 11:37:11.851067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.851789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.852969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853822] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.857 [2024-07-15 11:37:11.853875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.853922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.853969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.854922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 
[2024-07-15 11:37:11.854965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.855959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.858 [2024-07-15 11:37:11.856627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.859 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:43.863 [2024-07-15 11:37:11.884006] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884061] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.884975] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 
[2024-07-15 11:37:11.885619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.885955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886236] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886706] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.886955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887170] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.887808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888394] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.888994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 
[2024-07-15 11:37:11.889564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.889978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.890023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.890066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.863 [2024-07-15 11:37:11.890100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.890958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.891955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892346] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.892957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 
[2024-07-15 11:37:11.893565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.893988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894903] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.894988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895081] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.895963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896273] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.864 [2024-07-15 11:37:11.896593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.896973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 
[2024-07-15 11:37:11.897392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.897729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.898976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.899983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900107] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.900995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 [2024-07-15 11:37:11.901673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 
[2024-07-15 11:37:11.901715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:43.865 
[... identical nvmf_bdev_ctrlr_read_cmd errors for 2024-07-15 11:37:11.901756 through 11:37:11.918110 omitted ...] 
00:10:43.868 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 
00:10:43.868 [... identical nvmf_bdev_ctrlr_read_cmd errors for 2024-07-15 11:37:11.918160 through 11:37:11.929538 omitted ...] 
00:10:44.159 [2024-07-15 11:37:11.929586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.929958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930333] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.159 [2024-07-15 11:37:11.930373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930664] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.930904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.931974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 
[2024-07-15 11:37:11.932193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.932972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933630] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.160 [2024-07-15 11:37:11.933988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.934969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935016] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.935984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 
[2024-07-15 11:37:11.936259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936568] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.936982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.937973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.938017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.161 [2024-07-15 11:37:11.938048] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938719] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938749] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938843] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.938990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 
[2024-07-15 11:37:11.939731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939880] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.939978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.940969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.162 [2024-07-15 11:37:11.941387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.941989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942208] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942498] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.942974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 
[2024-07-15 11:37:11.943704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.943891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944291] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944506] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.163 [2024-07-15 11:37:11.944914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.944948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.944980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 true 00:10:44.164 [2024-07-15 11:37:11.945480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.945987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164 [2024-07-15 11:37:11.946358] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.164
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 -- repeated verbatim, timestamps 11:37:11.946398 through 11:37:11.967486, elided ...]
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:44.169
[... identical read-length errors repeated verbatim, timestamps 11:37:11.967533 through 11:37:11.969165, elided ...]
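For context on the flood above: the check at ctrlr_bdev.c:309 rejects a read whenever the requested transfer (NLB, the zero-based number-of-logical-blocks field, times the block size) exceeds the length described by the command's SGL, and the suppressed completion status sct=0, sc=15 corresponds to 0x0F, Data SGL Length Invalid, in the NVMe status encoding. The following is a minimal sketch of that kind of guard, with hypothetical names; it is not SPDK's actual code:

/* Sketch (hypothetical names) of the transfer-length guard behind
 * "Read NLB 1 * block size 512 > SGL length 1".  The NLB field in an
 * NVMe read command (CDW12 bits 15:0) is zero-based, hence the +1. */
#include <stdint.h>
#include <stdio.h>

#define SCT_GENERIC                 0x0
#define SC_DATA_SGL_LENGTH_INVALID  0xf   /* sc=15 in the suppressed message */

static int read_cmd_length_ok(uint32_t cdw12, uint32_t block_size,
                              uint64_t sgl_length, int *sct, int *sc)
{
    uint64_t nlb = (uint64_t)(cdw12 & 0xffff) + 1;   /* zero-based field */

    if (nlb * block_size > sgl_length) {
        *sct = SCT_GENERIC;
        *sc  = SC_DATA_SGL_LENGTH_INVALID;
        return 0;
    }
    return 1;
}

int main(void)
{
    int sct, sc;
    /* The failing case from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
    if (!read_cmd_length_ok(0 /* NLB field: 1 block */, 512, 1, &sct, &sc)) {
        printf("Read NLB 1 * block size 512 > SGL length 1 (sct=%d, sc=%d)\n",
               sct, sc);
    }
    return 0;
}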
[... identical read-length errors repeated verbatim, timestamps 11:37:11.969210 through 11:37:11.969490, elided ...]
11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:44.170
[... identical read-length errors repeated verbatim, timestamps 11:37:11.969531 through 11:37:11.969817, elided ...]
11:37:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.170
[... identical read-length errors repeated verbatim, timestamps 11:37:11.969873 through 11:37:11.970128, elided ...]
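The two harness lines above are the stress loop's own bookkeeping: `kill -0 1855574` verifies the traffic-generating process is still alive without delivering a signal, and rpc.py then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, the event this test exercises. As a small illustration of the liveness idiom only (plain POSIX kill(2) semantics, where signal 0 performs error checking without sending anything; this is not part of the test scripts):

/* What "kill -0 <pid>" does under the hood: probe for process existence. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 2;
    }
    pid_t pid = (pid_t)strtol(argv[1], NULL, 10);

    if (kill(pid, 0) == 0) {
        printf("pid %ld is alive\n", (long)pid);   /* kill -0 succeeds */
        return 0;
    }
    /* ESRCH: no such process; EPERM: it exists but we may not signal it. */
    perror("kill");
    return 1;
}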
[... identical read-length errors repeated verbatim, timestamps 11:37:11.970186 through 11:37:11.974591, elided ...] [2024-07-15 11:37:11.974639] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.170 [2024-07-15 11:37:11.974687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.170 [2024-07-15 11:37:11.974747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.170 [2024-07-15 11:37:11.974793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.974848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.974897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.974943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.974986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975681] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 
[2024-07-15 11:37:11.975812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.975967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.976970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977879] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.977961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978738] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.978967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979770] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.979922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 
[2024-07-15 11:37:11.979972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.980019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.980067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.980112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.980159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.171 [2024-07-15 11:37:11.980205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980605] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980888] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.980988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981591] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.981977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982397] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.982988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983028] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.983495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 
[2024-07-15 11:37:11.983546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984143] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984180] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984275] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984730] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.984963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.172 [2024-07-15 11:37:11.985432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.985965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986080] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986201] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.986802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987417] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 
[2024-07-15 11:37:11.987894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.987986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988122] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.988999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.173 [2024-07-15 11:37:11.989862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.989902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.989944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.989982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990087] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.990963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 [2024-07-15 11:37:11.991611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.174 
[2024-07-15 11:37:11.991651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:44.174 [... identical *ERROR* line repeated several hundred times, event timestamps 11:37:11.991651 through 11:37:12.018934 (elapsed 00:10:44.174-00:10:44.179); duplicate entries elided ...]
00:10:44.179 [2024-07-15 11:37:12.018934] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179
[2024-07-15 11:37:12.018973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:44.179 [2024-07-15 11:37:12.019817] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.019964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020079] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020277] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 
11:37:12.020490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.020993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.179 [2024-07-15 11:37:12.021484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021533] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:44.180 [2024-07-15 11:37:12.021674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.021963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022431] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.022588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023314] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023355] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.023964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024309] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024348] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.024994] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025039] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025089] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 
[2024-07-15 11:37:12.025496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.025729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026257] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026818] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026871] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.026971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.027019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.180 [2024-07-15 11:37:12.027067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.027987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028311] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028886] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.028974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 
[2024-07-15 11:37:12.029912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.029959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030442] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030775] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.030984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031025] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031509] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031548] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.031972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.032019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.032069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.032123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.032175] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.181 [2024-07-15 11:37:12.032221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.032966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033010] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 
[2024-07-15 11:37:12.033877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.033974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034350] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034511] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034814] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.034982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035569] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.035689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.036547] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.182 [2024-07-15 11:37:12.064113] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.064969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 
[2024-07-15 11:37:12.065352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.065979] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066492] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066753] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066924] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.066963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067932] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.067977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068056] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068235] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068640] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.068866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069891] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.188 [2024-07-15 11:37:12.069939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.069986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 
[2024-07-15 11:37:12.070082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.070968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071384] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.071981] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072338] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:44.189 [2024-07-15 11:37:12.072659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072763] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.072999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073052] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073670] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.073965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 
[2024-07-15 11:37:12.074816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.074993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.075031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.075071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.075118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.075160] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.189 [2024-07-15 11:37:12.075197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075404] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.075966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076782] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.076960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077480] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077687] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077734] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077840] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.077985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078545] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 
[2024-07-15 11:37:12.078683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.078920] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079802] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.079993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.190 [2024-07-15 11:37:12.080471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080589] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.080964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081322] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.191 [2024-07-15 11:37:12.081363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 [... same ctrlr_bdev.c: 309 *ERROR* line repeated verbatim for every timestamp from 11:37:12.081406 through 11:37:12.109738 (elapsed 00:10:44.191 - 00:10:44.195) ...] 00:10:44.195 [2024-07-15 11:37:12.109770] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.109811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.109858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.109900] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.109943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.109982] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110795] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 
[2024-07-15 11:37:12.110896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.110987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111234] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111604] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111736] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.195 [2024-07-15 11:37:12.111768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.111806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.111852] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.111893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.111935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112571] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112698] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112743] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112890] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.112985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113123] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113328] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113521] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113607] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.113985] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114189] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114227] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114357] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 
[2024-07-15 11:37:12.114440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.114910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115693] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115884] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.115978] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116851] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.116972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117131] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117256] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117668] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117868] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.117964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.196 [2024-07-15 11:37:12.118012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 
[2024-07-15 11:37:12.118762] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.118991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.119959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120009] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120294] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.120970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121019] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121216] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.121680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122329] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122361] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:44.197 [2024-07-15 11:37:12.122397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122563] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122768] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.122993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123085] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123489] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 
[2024-07-15 11:37:12.123776] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.123968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.197 [2024-07-15 11:37:12.124254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124523] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124566] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.124813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125321] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125685] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.125976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126176] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.198 [2024-07-15 11:37:12.126477] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:44.198 [2024-07-15 11:37:12.126524 .. 2024-07-15 11:37:12.131428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated for every timestamp in this range)
00:10:44.199 11:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:44.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
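The repeated ctrlr_bdev.c:309 error above is the SPDK target rejecting each read because the transfer length implied by the command, NLB * block size = 1 * 512 = 512 bytes, exceeds the 1 byte the host's SGL can receive; this is the malformed I/O the ns_hotplug_stress test keeps issuing while rpc.py nvmf_subsystem_add_ns attaches the Delay0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1. A minimal C sketch of the shape of that length check follows; it is an illustration written for this log, not the literal SPDK source, and the names read_req and check_read_length are hypothetical.

    #include <inttypes.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the per-command state the target tracks. */
    struct read_req {
            uint64_t num_blocks;   /* NLB: logical blocks requested by the Read command */
            uint32_t block_size;   /* namespace logical block size, 512 in this log */
            uint32_t sgl_length;   /* total bytes the host's SGL can receive, 1 here */
    };

    /* Returns 0 if the read fits the SGL, -1 if it must be completed with an
     * error, mirroring the "Read NLB x * block size y > SGL length z" message. */
    static int check_read_length(const struct read_req *req)
    {
            if (req->num_blocks * req->block_size > req->sgl_length) {
                    fprintf(stderr,
                            "Read NLB %" PRIu64 " * block size %" PRIu32
                            " > SGL length %" PRIu32 "\n",
                            req->num_blocks, req->block_size, req->sgl_length);
                    return -1;  /* the host then sees "Read completed with error" */
            }
            return 0;
    }

With NLB = 1 and a 512-byte block size, 512 > 1 holds for every command, so each read completes with an error on the host side, which is what the "Message suppressed 999 times" lines report.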
00:10:44.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:44.199 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:44.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:44.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:10:44.485 [2024-07-15 11:37:12.319880 .. 2024-07-15 11:37:12.341324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical message repeated for every timestamp in this range) 00:10:44.488
[2024-07-15 11:37:12.341374] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341565] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.341914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342747] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342838] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.342998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343274] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343625] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.343954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344001] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344138] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344184] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344520] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344863] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.344962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345165] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 
[2024-07-15 11:37:12.345709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 [2024-07-15 11:37:12.345916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 Message suppressed 999 times: [2024-07-15 11:37:12.345961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.488 Read completed with error (sct=0, sc=15) 00:10:44.488 [2024-07-15 11:37:12.346006] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346131] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346454] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346704] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 
11:37:12.346745] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346786] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.346997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347149] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347481] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.347905] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:44.489 [2024-07-15 11:37:12.347953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348125] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348368] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.348971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349528] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349815] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.349971] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350019] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350538] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350677] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350816] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.350959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351054] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 11:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:44.489 [2024-07-15 11:37:12.351428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351466] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351586] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 [2024-07-15 11:37:12.351678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489 11:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 
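The flood above is one validation firing repeatedly: per the ctrlr_bdev.c:309 message, the target rejects any read whose requested payload (NLB * block size, here 1 * 512 bytes) is larger than the buffer its SGL describes (here 1 byte), and the host then sees the command complete with sct=0, sc=15, i.e. generic status 0x0f, Data SGL Length Invalid. A minimal standalone sketch of that length check, using illustrative names and status constants rather than SPDK's actual definitions:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical constants mirroring NVMe Generic Command Status values:
 * sct=0 is the generic status code type; sc=0x0f (decimal 15) is
 * "Data SGL Length Invalid" -- the (sct=0, sc=15) pair seen above. */
#define NVME_SCT_GENERIC                 0x00
#define NVME_SC_SUCCESS                  0x00
#define NVME_SC_DATA_SGL_LENGTH_INVALID  0x0f

/* Reject a read whose payload cannot fit in the SGL-described buffer,
 * logging in the same shape as the ctrlr_bdev.c:309 message. */
static int
validate_read_len(uint64_t num_blocks, uint32_t block_size,
                  uint32_t sgl_length, uint8_t *sct, uint8_t *sc)
{
        if (num_blocks * block_size > sgl_length) {
                fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                        " > SGL length %" PRIu32 "\n",
                        num_blocks, block_size, sgl_length);
                *sct = NVME_SCT_GENERIC;
                *sc = NVME_SC_DATA_SGL_LENGTH_INVALID;
                return -1;
        }
        *sct = NVME_SCT_GENERIC;
        *sc = NVME_SC_SUCCESS;
        return 0;
}

int
main(void)
{
        uint8_t sct, sc;

        /* The failing case from the log: 1 block of 512 bytes against a
         * 1-byte SGL buffer. */
        validate_read_len(1, 512, 1, &sct, &sc);
        printf("Read completed with error (sct=%u, sc=%u)\n",
               (unsigned)sct, (unsigned)sc);
        return 0;
}

The interleaved ns_hotplug_stress.sh traces show why the check keeps retriggering: the script is resizing the NULL1 bdev (here to 1014 blocks, via the bdev_null_resize RPC) while I/O is still running, which is the hot-plug churn this stress test is built around.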
[2024-07-15 11:37:12.352177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.489
[2024-07-15 11:37:12.363477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363703] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.363989] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.364992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365269] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365354] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365539] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365583] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365673] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365898] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365941] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.365986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366034] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366159] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366484] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366771] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.366983] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 
[2024-07-15 11:37:12.367223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367424] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367544] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367628] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.367812] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368449] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.368986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369280] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369414] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369552] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369796] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369849] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.491 [2024-07-15 11:37:12.369896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.369945] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.369992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370038] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370083] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370359] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370499] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370671] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370701] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370781] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.370882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.371373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 [2024-07-15 11:37:12.371418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.492 
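For context on the repeated message: the unit test is deliberately driving nvmf_bdev_ctrlr_read_cmd with a request whose SGL describes a smaller buffer (1 byte) than the transfer the read command asks for (NLB 1 * block size 512 = 512 bytes), so the target rejects the read and logs the error once per attempt. Below is a minimal standalone sketch of that length check; the names (read_cmd_length_ok, sgl_length) are illustrative, not the SPDK source.

/*
 * Sketch (not SPDK code) of the validation behind the log line above:
 * a read is rejected when the transfer implied by the command
 * (NLB * block size) exceeds the buffer described by the request's SGL.
 */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Return true if the read may proceed; log and return false otherwise. */
static bool
read_cmd_length_ok(uint64_t num_blocks, uint32_t block_size, uint32_t sgl_length)
{
	if (num_blocks * (uint64_t)block_size > sgl_length) {
		fprintf(stderr,
			"*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n",
			num_blocks, block_size, sgl_length);
		return false;
	}
	return true;
}

int
main(void)
{
	/* The failing case exercised by the test: 1 block of 512 bytes vs. a 1-byte SGL. */
	read_cmd_length_ok(1, 512, 1);
	/* A passing case for contrast: the SGL covers the full transfer. */
	if (read_cmd_length_ok(8, 512, 4096)) {
		printf("8 * 512 <= 4096: read accepted\n");
	}
	return 0;
}

Compiling and running the sketch reproduces the failing case from the log (1 * 512 > 1) and one accepted read for contrast.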
[... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd *ERROR* line continues to repeat verbatim, differing only in timestamp, from 2024-07-15 11:37:12.371457 through 11:37:12.385696 (elapsed 00:10:44.492-00:10:44.493) ...]
[2024-07-15 11:37:12.385734] ctrlr_bdev.c:
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.385977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.493 [2024-07-15 11:37:12.386270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386694] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386889] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 
[2024-07-15 11:37:12.386933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.386977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387023] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387219] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387261] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.387493] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388159] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388425] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388710] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.388986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389913] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.389971] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390154] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390194] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390310] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390351] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390800] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.390885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391068] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391152] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 
[2024-07-15 11:37:12.391191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391230] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391276] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391524] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.391954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392380] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392423] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392616] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.392993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393453] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393790] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.393842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394529] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394712] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.394961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.395007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 [2024-07-15 11:37:12.395049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.494 
[2024-07-15 11:37:12.395095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395260] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395301] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395421] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395459] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395666] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395830] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395873] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395912] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.395952] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396000] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396095] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396190] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396331] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396472] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396567] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396758] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.396953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.397007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.397060] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.397103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.397140] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.397318] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:10:44.495 [2024-07-15 11:37:12.397962] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:10:44.495 [2024-07-15 11:37:12.398096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398268] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398307] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398739] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398837] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398930] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.398976] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399030] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399077] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399126] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399543] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399661] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399867] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399911] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399950] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.399993] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400036] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400078] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400161] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400214] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400316] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400363] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400457] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400554] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400599] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400643] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400735] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400780] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.400964] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401252] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401443] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401490] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.401631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 
[2024-07-15 11:37:12.402151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402346] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402488] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402532] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402615] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402785] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402829] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402916] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.402956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403092] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.495 [2024-07-15 11:37:12.403306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: "Read NLB 1 * block size 512 > SGL length 1" repeated verbatim for every read command in the batch (timestamps 11:37:12.403345 through 11:37:12.430054, console 00:10:44.495-00:10:44.499); duplicate lines collapsed ...]
00:10:44.499 [2024-07-15 11:37:12.430099] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430382] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.430995] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431090] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431134] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431317] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431820] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.431968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432172] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432637] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432670] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432711] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432845] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432885] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 
[2024-07-15 11:37:12.432928] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.432973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433103] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433139] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433473] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433636] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.433678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.434166] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.499 [2024-07-15 11:37:12.434217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434267] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434410] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434744] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434789] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434895] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.434980] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435104] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435148] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435324] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435396] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435598] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435684] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435729] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435772] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.435991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436073] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436113] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436239] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436385] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436577] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436627] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 
[2024-07-15 11:37:12.436774] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436870] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436915] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.436959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437791] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.437986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438156] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438281] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438576] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438659] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438822] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438958] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.438998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439042] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439084] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439193] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439365] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439405] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439442] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439618] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439663] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439856] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.439998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440135] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440181] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440231] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440854] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440894] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.440974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 
[2024-07-15 11:37:12.441051] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441379] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441422] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441500] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441541] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441654] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441823] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441917] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.441968] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442014] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442063] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.500 [2024-07-15 11:37:12.442249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442332] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442536] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442575] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442617] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442656] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442695] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442738] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442783] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442827] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442878] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.442972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443021] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443116] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443253] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.443951] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444107] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444487] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444531] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444579] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.444910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 
[2024-07-15 11:37:12.444957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445155] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445296] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445340] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445435] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445573] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445784] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445948] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.445991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446399] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446444] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446564] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446606] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446645] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446689] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.446728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447353] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447395] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447515] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.501 [2024-07-15 11:37:12.447598] ctrlr_bdev.c: 
00:10:44.501 [2024-07-15 11:37:12.447639] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:44.501 [... same *ERROR* line repeated back-to-back for every issued read, timestamps 2024-07-15 11:37:12.447639 through 11:37:12.449848 ...]
00:10:44.501 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:10:44.505 [... same *ERROR* line repeated back-to-back, timestamps 2024-07-15 11:37:12.450336 through 11:37:12.475020 ...]
[2024-07-15 11:37:12.475064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475112] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475253] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475305] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475402] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475450] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475497] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475683] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475732] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475778] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.475965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476018] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476067] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476114] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476164] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476254] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476751] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476936] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.476977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477017] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477185] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477228] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477360] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477551] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477598] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477649] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477859] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.477955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478002] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478049] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478096] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478145] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478241] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478287] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478386] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478436] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478587] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478877] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478921] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.478965] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 
[2024-07-15 11:37:12.479087] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479217] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479302] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479477] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.479595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480129] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480223] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480303] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480447] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480667] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.480954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481050] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481196] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481292] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481408] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481453] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481495] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481574] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481675] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481716] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481797] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481842] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481885] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.481973] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482012] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482100] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482183] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482468] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482609] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482655] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482696] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482741] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482792] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482846] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482893] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.482990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 
[2024-07-15 11:37:12.483467] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483715] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483760] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.505 [2024-07-15 11:37:12.483808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.483862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.483909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.483957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484055] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484295] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484390] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484434] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484479] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484517] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484561] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484642] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484682] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484727] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.484990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485031] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485074] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485115] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485377] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485474] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485559] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485596] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485771] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485906] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485944] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.485987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486069] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486117] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486162] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486212] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486860] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.486957] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487221] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487370] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 
[2024-07-15 11:37:12.487455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487496] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487537] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487664] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487755] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487793] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487841] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487953] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.487999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488040] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488128] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488203] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488246] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488330] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488371] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488494] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488535] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488584] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488624] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488709] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.488961] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489200] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489250] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489440] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489622] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.489805] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490158] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490210] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490259] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490306] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490397] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490441] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490483] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490522] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490594] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490672] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490714] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490807] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490949] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.490991] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491070] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491147] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491188] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491229] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 
[2024-07-15 11:37:12.491270] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491312] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491356] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491398] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491439] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491518] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491646] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491686] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491726] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491767] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491857] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491897] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491938] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.491987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492032] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492082] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492179] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492225] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492273] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492320] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.506 [2024-07-15 11:37:12.492367] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.507 [2024-07-15 11:37:12.492415] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:10:44.507 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated continuously, timestamps 11:37:12.492462 through 11:37:12.500193 ...]
00:10:44.507 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
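What this flood records: the NVMe-oF bdev layer validates each READ before submitting it, and nvmf_bdev_ctrlr_read_cmd (ctrlr_bdev.c:309 in this build) rejects any read whose transfer size, NLB times the 512-byte block size, exceeds the 1-byte SGL supplied with the request, so every command in this burst fails the same way. Below is a minimal sketch of that kind of length check; the struct and helper names (read_cmd, read_cmd_length_ok) are illustrative assumptions, not SPDK's actual definitions.

    /* Illustrative sketch of the length check behind the logged error.
     * read_cmd and read_cmd_length_ok are made-up names; SPDK's real
     * request/response structures differ. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct read_cmd {
        uint64_t num_blocks;  /* NLB from the command, as a block count */
        uint32_t block_size;  /* bytes per logical block, 512 in this test */
        uint32_t sgl_length;  /* total bytes described by the request's SGL */
    };

    static bool read_cmd_length_ok(const struct read_cmd *cmd)
    {
        /* Reject reads whose data transfer would overrun the SGL buffer. */
        if (cmd->num_blocks * cmd->block_size > cmd->sgl_length) {
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu32 "\n",
                    cmd->num_blocks, cmd->block_size, cmd->sgl_length);
            return false;  /* caller completes the command with an error status */
        }
        return true;
    }

    int main(void)
    {
        /* The values from the log: 1 block of 512 bytes vs. a 1-byte SGL. */
        struct read_cmd cmd = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
        return read_cmd_length_ok(&cmd) ? 0 : 1;
    }

Each rejected command is completed back to the host with generic status sct=0 and sc=15 (0x0F, Data SGL Length Invalid in NVMe terms), which is the "Read completed with error (sct=0, sc=15)" completion the logger above reports suppressing after 999 repeats.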
00:10:44.508 [... the same ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd error repeated continuously, timestamps 11:37:12.500235 through 11:37:12.520790 ...]
00:10:44.510 [2024-07-15 11:37:12.520839] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.520882] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.520931] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.520967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521186] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521232] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521282] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521363] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521412] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521455] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521547] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521638] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521690] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521742] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521799] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521853] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.521999] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 
[2024-07-15 11:37:12.522035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522207] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522251] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522341] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522419] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522502] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522761] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.522798] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523352] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523409] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523549] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523641] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523737] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523839] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.523998] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524047] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524097] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524142] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524328] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524375] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524463] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524593] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524634] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524674] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524752] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524791] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524844] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524887] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524929] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.524974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525016] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525057] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525136] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525175] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525262] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525339] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.510 [2024-07-15 11:37:12.525376] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525418] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525460] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525503] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525542] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525581] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525620] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525660] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525700] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525756] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525865] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 
[2024-07-15 11:37:12.525914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.525960] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526008] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526610] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526658] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526705] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526801] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.526963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527041] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527091] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527178] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527218] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527308] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527393] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527429] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527465] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527550] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527590] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527633] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527678] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527720] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527804] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527946] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.527992] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528035] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528197] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528290] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528342] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528438] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528482] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528527] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528626] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528670] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528864] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528910] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528954] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.528986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529110] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529158] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529284] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529327] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529373] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.529986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530072] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530120] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530174] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530271] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 
[2024-07-15 11:37:12.530319] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530369] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530461] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530508] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530665] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530713] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530759] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530859] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530908] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.530955] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531001] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531093] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531141] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531187] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531238] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531285] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531336] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531476] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531519] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531560] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531595] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531766] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531806] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531850] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531935] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.531977] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532020] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532062] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532094] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532173] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532258] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532298] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532338] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532378] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532588] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532623] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.532722] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533372] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533420] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533597] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533635] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533719] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533764] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.511 [2024-07-15 11:37:12.533808] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.533862] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.533904] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.533947] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.533987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534029] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534106] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534144] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534182] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 
[2024-07-15 11:37:12.534220] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534263] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534304] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534349] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534392] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534432] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534475] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534512] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534557] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534650] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534697] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534748] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534848] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534940] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.534988] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535133] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535177] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535222] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535266] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535315] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535411] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535456] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535505] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535555] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535601] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535647] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535788] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535836] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.535966] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536007] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536046] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536562] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536608] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536648] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536688] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536728] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536769] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536811] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536855] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536899] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536943] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.536987] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537066] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537151] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537201] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537247] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537345] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537394] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537437] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537486] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537534] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537580] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537629] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537676] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537723] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537820] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537874] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.537969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538003] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538043] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538086] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 
[2024-07-15 11:37:12.538124] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538168] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538206] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538255] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538297] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538343] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538391] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538462] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538501] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538540] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538582] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538621] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538725] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538819] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538866] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538914] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.538974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539022] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539064] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539111] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539191] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.512 [2024-07-15 11:37:12.539233] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
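The flood above is a negative-path check: each iteration submits an NVMe Read whose payload (NLB 1 * block size 512 = 512 bytes) is larger than the 1 byte its SGL describes, so the target rejects the command before issuing any bdev I/O, logging this message each time. Below is a minimal standalone sketch of that length guard; the struct and function names are hypothetical stand-ins for illustration, not SPDK's actual definitions (the real check is the one in lib/nvmf/ctrlr_bdev.c that prints this error).

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified view of an NVMe-oF read request; the real
 * SPDK request object carries far more state than this. */
struct read_req {
    uint64_t num_blocks;  /* blocks requested (NLB, already converted to a count) */
    uint32_t block_size;  /* namespace block size in bytes, e.g. 512 */
    uint32_t sgl_length;  /* total bytes described by the command's SGL */
};

/* Sketch of the guard that emits the error above: the requested read
 * payload (num_blocks * block_size) must fit in the buffer the SGL
 * describes, otherwise the command is rejected up front. */
static bool read_cmd_length_ok(const struct read_req *req)
{
    if (req->num_blocks * req->block_size > req->sgl_length) {
        fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n",
                req->num_blocks, req->block_size, req->sgl_length);
        return false;
    }
    return true;
}

int main(void)
{
    /* Mirrors the case in the log: 1 block of 512 bytes against an
     * SGL that only describes 1 byte -> rejected. */
    struct read_req req = { .num_blocks = 1, .block_size = 512, .sgl_length = 1 };
    printf("command %s\n", read_cmd_length_ok(&req) ? "accepted" : "rejected");
    return 0;
}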
00:10:44.513 true
00:10:44.513 [... the same ctrlr_bdev.c:309 error line resumes, timestamps 2024-07-15 11:37:12.540860 through 11:37:12.543669; duplicates omitted ...]
00:10:44.513 [2024-07-15 11:37:12.543718] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.543765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.543813] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.543869] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.543918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.543970] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544058] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544108] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544204] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544242] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544278] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544323] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544364] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544413] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544452] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544491] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544530] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544578] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544619] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544653] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544699] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544787] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544824] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544877] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544918] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544959] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.544997] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545037] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545202] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545286] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545325] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545451] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545498] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545592] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545691] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545746] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545794] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545896] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545939] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.545986] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 
[2024-07-15 11:37:12.546033] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546614] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546669] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546717] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546765] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546810] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546872] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546919] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.546967] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547011] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547059] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547105] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547150] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547198] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547243] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547289] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547335] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547388] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547433] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547480] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547526] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547572] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547612] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547652] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547692] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547733] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547773] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547821] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547902] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547942] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.547984] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548026] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548076] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548118] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548157] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548205] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548245] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548293] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548326] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548366] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548406] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548446] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548485] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548525] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548570] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548613] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548662] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548708] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548740] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.513 [2024-07-15 11:37:12.548784] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.548828] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.548876] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.548922] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.548963] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549005] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549045] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549088] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549130] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549169] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549215] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549264] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549311] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549858] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549907] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549943] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.549990] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550065] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550109] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550153] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550199] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550240] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550279] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550322] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 
[2024-07-15 11:37:12.550362] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550416] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550458] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550504] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550546] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550588] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550632] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550677] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550724] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550779] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550826] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550881] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550925] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.550974] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551024] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551071] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551119] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551163] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551209] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551248] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551300] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551347] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551389] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551427] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551469] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551514] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551553] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551585] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551631] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551679] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551731] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551777] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551825] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551875] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551926] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.551972] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552027] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552075] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552121] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552167] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552213] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552265] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552313] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552358] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552407] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552464] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552510] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552556] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552603] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552651] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.552702] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553199] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 Message suppressed 999 times: [2024-07-15 11:37:12.553249] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 Read completed with error (sct=0, sc=15) 00:10:44.514 [2024-07-15 11:37:12.553299] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553344] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553387] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553426] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553471] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553513] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553558] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553600] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553644] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553680] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553721] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553757] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553803] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553849] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553892] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553933] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.553969] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554015] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554056] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554098] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554132] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554171] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554211] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554248] ctrlr_bdev.c: 
309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554288] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554337] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554381] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554428] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554470] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554516] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554563] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554611] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554657] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554707] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554754] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554809] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554861] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554909] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.554956] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555004] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555053] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555101] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555146] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555192] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555237] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555283] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555334] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555383] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 [2024-07-15 11:37:12.555430] ctrlr_bdev.c: 309:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:10:44.514 
11:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:44.514 11:37:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.890 11:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.890 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:45.890 11:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:45.890 11:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 true 00:10:45.890 11:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:45.890 11:37:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.827 11:37:14
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.085 11:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:47.085 11:37:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:47.085 true 00:10:47.086 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:47.086 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.344 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.603 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:47.603 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:47.861 true 00:10:47.861 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:47.861 11:37:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:48.797 11:37:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.797 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:49.056 11:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:49.056 11:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:49.314 true 00:10:49.314 11:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:49.314 11:37:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.251 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.251 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:50.251 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:50.510 true 
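The @44-@50 markers repeating above trace one iteration of the hotplug stress loop: check that the I/O generator (pid 1855574) is still alive, hot-remove namespace 1 from nqn.2016-06.io.spdk:cnode1, re-attach the Delay0 bdev as a namespace, and grow the NULL1 bdev by one block. A minimal bash sketch of that iteration, reconstructed from the xtrace markers (the rpc and perf_pid shorthands and the while-loop framing are assumptions, not the verbatim script):

    # Sketch of ns_hotplug_stress.sh lines 44-50 as suggested by the xtrace above;
    # "rpc" and "perf_pid" are assumed names, only the commands come from the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1014
    while kill -0 "$perf_pid"; do                                       # line 44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
        null_size=$((null_size + 1))                                    # line 49
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # line 50
    done

Each successful bdev_null_resize prints true, which is the bare "true" scattered between iterations in the log; null_size climbing 1015, 1016, 1017, ... is the same counter.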
00:10:50.510 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:50.510 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.768 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.768 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:50.768 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:51.025 true 00:10:51.025 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:51.025 11:37:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 11:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:52.404 11:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:52.404 11:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:52.404 true 00:10:52.404 11:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:52.404 11:37:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:53.341 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.600 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:53.600 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:53.600 true 00:10:53.600 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:53.600 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.859 11:37:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.118 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:54.118 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:54.118 true 00:10:54.118 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:54.118 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.376 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.635 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:54.635 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:54.635 true 00:10:54.635 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:54.635 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.894 11:37:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.152 11:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:55.152 11:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:55.410 true 00:10:55.410 11:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:55.410 11:37:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.344 11:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.344 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:56.603 11:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:56.603 11:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:56.862 true 00:10:56.862 11:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:56.862 11:37:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.798 11:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.798 11:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:57.798 11:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:58.058 true 00:10:58.058 11:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:58.058 11:37:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.058 11:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.354 11:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:58.354 11:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:58.613 true 00:10:58.613 11:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:10:58.613 11:37:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 11:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.001 11:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:00.001 11:37:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:00.001 true 00:11:00.260 11:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:11:00.260 11:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.193 11:37:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.193 11:37:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:01.193 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:01.193 true 00:11:01.451 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:11:01.451 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.451 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.451 Initializing NVMe Controllers 00:11:01.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:01.451 Controller IO queue size 128, less than required. 00:11:01.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.451 Controller IO queue size 128, less than required. 00:11:01.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:01.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:01.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:01.451 Initialization complete. Launching workers. 00:11:01.451 ======================================================== 00:11:01.451 Latency(us) 00:11:01.451 Device Information : IOPS MiB/s Average min max 00:11:01.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2630.28 1.28 33161.48 1846.15 1106180.57 00:11:01.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18236.04 8.90 7018.89 2020.29 288151.72 00:11:01.451 ======================================================== 00:11:01.451 Total : 20866.32 10.19 10314.26 1846.15 1106180.57 00:11:01.451 00:11:01.709 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:01.709 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:01.966 true 00:11:01.967 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1855574 00:11:01.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1855574) - No such process 00:11:01.967 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1855574 00:11:01.967 11:37:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.967 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:02.225 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:02.225 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:02.225 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 
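The Total row of the latency table above is the IOPS-weighted average of the two namespace rows, which gives a quick sanity check of the numbers (awk used here purely as a calculator, values copied from the table):

    awk 'BEGIN {
        iops1 = 2630.28;  avg1 = 33161.48   # NSID 1 row: IOPS, average latency (us)
        iops2 = 18236.04; avg2 = 7018.89    # NSID 2 row
        total = iops1 + iops2
        printf "IOPS %.2f  avg %.2f us\n", total, (iops1*avg1 + iops2*avg2) / total
    }'
    # prints: IOPS 20866.32  avg 10314.26 us -- matching the Total row

The min and max columns of the Total row are simply the extremes across the two rows (1846.15 and 1106180.57 us, both from the NSID 1 row).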
00:11:02.225 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:02.225 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:02.482 null0 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:02.483 null1 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:02.483 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:02.741 null2 00:11:02.741 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:02.741 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:02.741 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:02.998 null3 00:11:02.998 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:02.998 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:02.998 11:37:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:02.998 null4 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:03.257 null5 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:03.257 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:03.515 null6 00:11:03.515 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:03.515 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:03.515 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:03.775 null7 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:03.775 11:37:31 
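With the perf phase done and namespaces 1 and 2 removed (@54/@55), the script provisions one null bdev per worker. The @58-@60 lines above correspond to a loop of this shape (reconstructed; going by bdev_null_create's name/size/block-size argument order, "100 4096" should mean a 100 MB bdev with 4096-byte blocks):

```sh
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nthreads=8        # @58
pids=()           # @58: filled in once the workers are forked
for ((i = 0; i < nthreads; i++)); do             # @59
    $rpc_py bdev_null_create "null$i" 100 4096   # @60: creates null0 ... null7
done
```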
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
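The @14-@18 markers that start to interleave here come from the script's add_remove helper, which every worker runs concurrently: ten rounds of attaching its bdev as a namespace and detaching it again. Reconstructed from the traced commands (the exact source formatting is an assumption):

```sh
add_remove() {                        # traced as ns_hotplug_stress.sh @14-@18
    local nsid=$1 bdev=$2             # @14
    for ((i = 0; i < 10; i++)); do    # @16
        $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
    done
}
```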
00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
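The @62-@64 lines are the fan-out: one backgrounded add_remove per bdev, namespace ID i+1 paired with null<i>, each PID recorded so that @66 (visible just below, listing 1861202 through 1861214) can act as a barrier. In sketch form:

```sh
for ((i = 0; i < nthreads; i++)); do   # @62
    add_remove $((i + 1)) "null$i" &   # @63: fork a worker
    pids+=($!)                         # @64: remember its PID
done
wait "${pids[@]}"                      # @66: block until all eight workers finish
```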
00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1861202 1861203 1861205 1861206 1861208 1861210 1861212 1861214 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:03.775 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:04.035 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:04.035 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:04.035 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.035 11:37:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.035 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.294 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:04.554 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.812 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:04.813 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:04.813 
11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:05.071 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:05.071 11:37:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.071 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:05.330 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:05.589 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:05.847 
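All eight workers write their xtrace to the same stderr, so the @16-@18 lines for different namespace IDs interleave nondeterministically; that shuffling, not any misordering in the test, is why add and remove calls for unrelated namespaces appear back to back above. Purely as a hypothetical readability aid (this is not in ns_hotplug_stress.sh), each worker's trace could be tagged through a process substitution:

```sh
# Hypothetical tweak: prefix a worker's xtrace/stderr with its namespace ID.
add_remove "$nsid" "$bdev" 2> >(sed "s/^/[ns$nsid] /" >&2) &
```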
11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:05.847 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.105 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:06.364 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:06.622 
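Each add/remove line above is a separate invocation of scripts/rpc.py, i.e. one JSON-RPC request to the running nvmf target over its Unix-domain socket. A standalone round with the socket spelled out (the -s value shown is SPDK's usual default; the trace omits it and relies on that default, so treat it as an assumption):

```sh
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/spdk.sock   # assumed default socket path
$rpc -s "$sock" nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
$rpc -s "$sock" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
```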
11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.622 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:06.880 11:37:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:07.138 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.138 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.138 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:07.138 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:07.139 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:07.398 11:37:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.398 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.398 rmmod nvme_tcp 00:11:07.656 rmmod nvme_fabrics 00:11:07.656 rmmod nvme_keyring 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 
1855153 ']' 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1855153 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1855153 ']' 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1855153 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1855153 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1855153' 00:11:07.656 killing process with pid 1855153 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1855153 00:11:07.656 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1855153 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:07.915 11:37:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.819 11:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.819 00:11:09.819 real 0m48.033s 00:11:09.819 user 3m5.387s 00:11:09.819 sys 0m21.445s 00:11:09.819 11:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.819 11:37:37 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:09.819 ************************************ 00:11:09.819 END TEST nvmf_ns_hotplug_stress 00:11:09.819 ************************************ 00:11:09.819 11:37:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:09.819 11:37:37 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:09.819 11:37:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.819 11:37:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.819 11:37:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.077 ************************************ 00:11:10.077 START TEST nvmf_connect_stress 00:11:10.077 ************************************ 00:11:10.077 11:37:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:11:10.077 * Looking for test storage... 00:11:10.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.077 11:37:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:16.635 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.635 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:16.635 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:16.636 Found net devices under 0000:af:00.0: cvl_0_0 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.636 11:37:44 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:16.636 Found net devices under 0000:af:00.1: cvl_0_1 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.636 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.893 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.150 11:37:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:17.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:17.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:11:17.150 00:11:17.150 --- 10.0.0.2 ping statistics --- 00:11:17.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.150 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:11:17.150 00:11:17.150 --- 10.0.0.1 ping statistics --- 00:11:17.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.150 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1865821 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1865821 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1865821 ']' 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:17.150 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.150 [2024-07-15 11:37:45.119719] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
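(For orientation: the nvmf_tcp_init sequence traced above reduces to the shell sketch below, reconstructed only from commands visible in this log. The cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are the values this particular runner used, not fixed constants of the test.)

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1                   # start from clean addresses
# the target NIC moves into a private network namespace; the initiator NIC stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP port and prove both directions are reachable
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# the target application then runs inside the namespace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE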
00:11:17.150 [2024-07-15 11:37:45.119772] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.150 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.150 [2024-07-15 11:37:45.192272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.407 [2024-07-15 11:37:45.264633] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.407 [2024-07-15 11:37:45.264684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.407 [2024-07-15 11:37:45.264693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.407 [2024-07-15 11:37:45.264702] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.407 [2024-07-15 11:37:45.264709] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:17.407 [2024-07-15 11:37:45.264811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.407 [2024-07-15 11:37:45.265112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.407 [2024-07-15 11:37:45.265115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.972 [2024-07-15 11:37:45.973266] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.972 11:37:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.972 [2024-07-15 11:37:46.000990] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.972 NULL1 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1866003 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:17.972 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.230 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.231 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.488 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.488 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:18.488 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.488 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.488 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.745 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.745 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:18.745 11:37:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.745 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.745 11:37:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.003 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.003 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1866003 00:11:19.003 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.003 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.003 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.568 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.568 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:19.568 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.568 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.568 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.826 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.826 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:19.826 11:37:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.826 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.826 11:37:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.084 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.084 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:20.084 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.084 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.084 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.342 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.342 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:20.342 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.342 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.342 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.600 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.600 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:20.600 11:37:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.600 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.600 11:37:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.230 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.230 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:21.230 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.230 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.230 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.487 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.487 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:21.487 11:37:49 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.487 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.487 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.745 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.745 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:21.745 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.745 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.745 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.003 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.003 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:22.003 11:37:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.003 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.003 11:37:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.261 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.261 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:22.261 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.261 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.261 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.826 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.826 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:22.826 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.826 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.826 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.084 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.084 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:23.084 11:37:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.084 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.084 11:37:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.342 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.342 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:23.342 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.342 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.342 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.600 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.600 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:23.600 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:23.600 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.600 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.858 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.858 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:23.858 11:37:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.858 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.858 11:37:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.424 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.424 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:24.424 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.424 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.424 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.682 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.682 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:24.682 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.682 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.682 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:24.939 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:24.939 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:24.939 11:37:52 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:24.939 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:24.939 11:37:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.208 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.208 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:25.208 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.208 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.208 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.773 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.773 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:25.773 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:25.773 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.773 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.031 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.031 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:26.031 11:37:53 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.031 11:37:53 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.031 11:37:53 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.289 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.289 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:26.289 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.289 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.289 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.548 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.548 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:26.548 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.548 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.548 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.806 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.806 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:26.806 11:37:54 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.806 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.806 11:37:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.372 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.372 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:27.372 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.372 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.372 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.630 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.631 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:27.631 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.631 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.631 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.889 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.889 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:27.889 11:37:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.889 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.889 11:37:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.146 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1866003 00:11:28.146 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1866003) - No such process 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1866003 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.146 rmmod nvme_tcp 00:11:28.146 rmmod nvme_fabrics 00:11:28.146 rmmod nvme_keyring 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1865821 ']' 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1865821 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1865821 ']' 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1865821 00:11:28.146 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1865821 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1865821' 00:11:28.405 killing process with pid 1865821 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1865821 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1865821 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.405 11:37:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.940 11:37:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:30.940 00:11:30.940 real 0m20.607s 00:11:30.940 user 0m40.610s 00:11:30.940 sys 0m10.225s 00:11:30.940 11:37:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:30.940 11:37:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 ************************************ 00:11:30.940 END TEST nvmf_connect_stress 00:11:30.940 ************************************ 00:11:30.940 11:37:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:30.940 11:37:58 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:30.940 11:37:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:30.940 11:37:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:30.940 11:37:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:30.940 ************************************ 00:11:30.940 START TEST nvmf_fused_ordering 00:11:30.940 ************************************ 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:30.941 * Looking for test storage... 00:11:30.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.941 
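The common.sh lines above pin down the initiator identity that every later nvme connect in this run reuses. As a rough standalone sketch of the same pattern (the final connect line is illustrative; common.sh itself only builds the array):

    # Host-identity setup in the spirit of nvmf/common.sh (assumes nvme-cli is installed)
    NVMF_PORT=4420                                  # primary NVMe/TCP listener port
    NVME_HOSTNQN=$(nvme gen-hostnqn)                # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}             # keep just the UUID portion for --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # Later tests splice the array into connect calls, e.g.:
    #   nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s "$NVMF_PORT" -n nqn.2016-06.io.spdk:testnqn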
11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.941 11:37:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:37.509 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.509 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:37.510 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:37.510 Found net devices under 0000:af:00.0: cvl_0_0 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:37.510 Found net devices under 0000:af:00.1: cvl_0_1 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:37.510 11:38:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:11:37.510 00:11:37.510 --- 10.0.0.2 ping statistics --- 00:11:37.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.510 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:11:37.510 00:11:37.510 --- 10.0.0.1 ping statistics --- 00:11:37.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.510 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1871383 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1871383 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1871383 ']' 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:37.510 11:38:05 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:37.510 [2024-07-15 11:38:05.398632] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:11:37.510 [2024-07-15 11:38:05.398679] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.510 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.510 [2024-07-15 11:38:05.473051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.510 [2024-07-15 11:38:05.543966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.510 [2024-07-15 11:38:05.544005] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.510 [2024-07-15 11:38:05.544014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.510 [2024-07-15 11:38:05.544024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.510 [2024-07-15 11:38:05.544031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
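The nvmf_tcp_init and nvmfappstart output above is the heart of the phy setup: one port of the e810 pair is moved into a private network namespace to play the target, the other stays in the root namespace as the initiator, and nvmf_tgt is started inside the namespace. A condensed sketch of the same flow (the socket-polling loop at the end stands in for waitforlisten and is illustrative):

    # Split the two cvl ports across namespaces and start the target (run as root)
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target-side port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target ns must answer
    ip netns exec "$NS" ping -c 1 10.0.0.1             # and back again
    # Start the target on core mask 0x2, then wait for its RPC socket
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done

Because /var/tmp/spdk.sock is a UNIX-domain socket in the shared filesystem, RPC clients can reach the target without entering the namespace themselves.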
00:11:37.510 [2024-07-15 11:38:05.544054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 [2024-07-15 11:38:06.243234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 [2024-07-15 11:38:06.259405] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 NULL1 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.447 11:38:06 
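The rpc_cmd lines above are the entire provisioning sequence for this test: one TCP transport, a subsystem capped at ten namespaces, a listener on 10.0.0.2:4420, and a null bdev attached as namespace 1. Replayed directly with SPDK's scripts/rpc.py (rpc_cmd in the harness is a thin wrapper around it), the same sequence looks roughly like this:

    # Provisioning via scripts/rpc.py (talks to /var/tmp/spdk.sock by default)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192   # -o drops the C2H success optimization, -u sets the I/O unit size
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10                         # allow any host; serial number; max 10 namespaces
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512           # 1000 MiB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_wait_for_examine
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The 1000 MiB null bdev is what the fused_ordering tool later reports as "Namespace ID: 1 size: 1GB" once it attaches.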
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.447 11:38:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' [2024-07-15 11:38:06.315477] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... [2024-07-15 11:38:06.315515] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1871656 ] EAL: No free 2048 kB hugepages reported on node 1 00:11:39.014 Attached to nqn.2016-06.io.spdk:cnode1 00:11:39.014 Namespace ID: 1 size: 1GB 00:11:39.014 fused_ordering(0) 00:11:39.014 fused_ordering(1) ... [fused_ordering(2) through fused_ordering(1021) reported sequentially; 1024 entries total] ... 00:11:40.974 fused_ordering(1022) 00:11:40.974 fused_ordering(1023) 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:40.974 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:11:40.975 rmmod nvme_tcp 00:11:40.975 rmmod nvme_fabrics 00:11:40.975 rmmod nvme_keyring 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1871383 ']' 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1871383 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1871383 ']' 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1871383 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:40.975 11:38:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1871383 00:11:40.975 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:40.975 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:40.975 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1871383' 00:11:40.975 killing process with pid 1871383 00:11:40.975 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1871383 00:11:40.975 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1871383 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:41.235 11:38:09 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.242 11:38:11 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:43.242 00:11:43.242 real 0m12.602s 00:11:43.242 user 0m6.450s 00:11:43.242 sys 0m7.153s 00:11:43.242 11:38:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.242 11:38:11 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.242 ************************************ 00:11:43.242 END TEST nvmf_fused_ordering 00:11:43.242 ************************************ 00:11:43.242 11:38:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:43.242 11:38:11 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:43.242 11:38:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:43.242 11:38:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
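The block above is the standard nvmftestfini teardown between suites: the EXIT trap is cleared, the kernel initiator modules are unloaded (the rmmod lines show nvme_tcp dragging out nvme_fabrics and nvme_keyring), the nvmf_tgt process (pid 1871383) is killed and reaped, the test network namespace is removed, and the leftover initiator address is flushed. A condensed sketch of that pattern; the function name is mine, and the namespace removal is a simplified stand-in for the in-tree _remove_spdk_ns helper:

    cleanup_nvmf() {
        local pid=$1
        set +e                        # module removal may fail if something is still bound
        modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring
        modprobe -v -r nvme-fabrics
        set -e
        if kill -0 "$pid" 2>/dev/null; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"               # reap the target so its sockets are really gone
        fi
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null  # assumption: what _remove_spdk_ns does
        ip -4 addr flush cvl_0_1
    }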
00:11:43.242 11:38:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:43.242 ************************************ 00:11:43.242 START TEST nvmf_delete_subsystem 00:11:43.242 ************************************ 00:11:43.242 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:43.501 * Looking for test storage... 00:11:43.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:43.501 11:38:11 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:43.501 11:38:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:50.068 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:50.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.068 
11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:50.068 Found net devices under 0000:af:00.0: cvl_0_0 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:50.068 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:50.069 Found net devices under 0000:af:00.1: cvl_0_1 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:50.069 11:38:17 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:50.069 11:38:17 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:50.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:50.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:50.069 00:11:50.069 --- 10.0.0.2 ping statistics --- 00:11:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.069 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:50.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:50.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:11:50.069 00:11:50.069 --- 10.0.0.1 ping statistics --- 00:11:50.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:50.069 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1875654 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1875654 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1875654 ']' 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:50.069 11:38:18 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:50.327 [2024-07-15 11:38:18.221674] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:11:50.327 [2024-07-15 11:38:18.221725] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.327 EAL: No free 2048 kB hugepages reported on node 1 00:11:50.327 [2024-07-15 11:38:18.297068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:50.327 [2024-07-15 11:38:18.370657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.327 [2024-07-15 11:38:18.370694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.327 [2024-07-15 11:38:18.370704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.327 [2024-07-15 11:38:18.370715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.327 [2024-07-15 11:38:18.370722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
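The target is launched under ip netns exec because the two e810 ports were split across namespaces a few steps earlier: cvl_0_0 (10.0.0.2) was moved into cvl_0_0_ns_spdk for the target, while cvl_0_1 (10.0.0.1) stayed in the root namespace for the initiator, so target and initiator traffic leaves via the NICs instead of being short-circuited through loopback. Condensed from the commands logged above, minus the address flushes and error handling:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator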
00:11:50.328 [2024-07-15 11:38:18.370774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.328 [2024-07-15 11:38:18.370777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 [2024-07-15 11:38:19.066507] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.262 [2024-07-15 11:38:19.082659] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.262 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.263 NULL1 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.263 Delay0 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.263 11:38:19 
nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1875893 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:51.263 11:38:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:51.263 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.263 [2024-07-15 11:38:19.167299] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:53.189 11:38:21 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.189 11:38:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.189 11:38:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 
Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 [2024-07-15 11:38:21.247326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2157af0 is same with the state(5) to be set 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 starting I/O failed: -6 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.189 Read completed with error (sct=0, sc=8) 00:11:53.189 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 starting I/O failed: -6 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 starting I/O failed: -6 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 starting I/O failed: -6 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 
00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 starting I/O failed: -6 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 starting I/O failed: -6 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 [2024-07-15 11:38:21.247700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff910000c00 is same with the state(5) to be set 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 
00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Read completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:53.190 Write completed with error (sct=0, sc=8) 00:11:54.125 [2024-07-15 11:38:22.221639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2158450 is same with the state(5) to be set 00:11:54.382 Write completed with error (sct=0, sc=8) 00:11:54.382 Write completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Write completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.382 Read completed with error (sct=0, sc=8) 00:11:54.383 [2024-07-15 11:38:22.249112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff91000d2f0 is same with the state(5) to be set 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 
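These repeated completions are expected: sct=0 is the NVMe generic command status type and sc=0x8 is "Command Aborted due to SQ Deletion", which is how the queue-depth-128 workload parked behind the one-second Delay0 bdev drains once nvmf_delete_subsystem (issued at 11:38:21 above) tears the queues down; the interleaved "starting I/O failed: -6" lines are perf's new submissions being refused once the queue pairs are gone. One way to tally the aborts from a saved copy of this console output (the file name here is assumed):

    grep -o 'completed with error (sct=0, sc=8)' console.log | wc -l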
00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 [2024-07-15 11:38:22.249884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2157910 is same with the state(5) to be set 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed 
with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 [2024-07-15 11:38:22.250058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b560 is same with the state(5) to be set 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Read completed with error (sct=0, sc=8) 00:11:54.383 Write completed with error (sct=0, sc=8) 00:11:54.383 [2024-07-15 11:38:22.250216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21785c0 is same with the state(5) to be set 00:11:54.383 Initializing NVMe Controllers 00:11:54.383 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:54.383 Controller IO queue size 128, less than required. 00:11:54.383 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:54.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:54.383 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:54.383 Initialization complete. Launching workers. 
00:11:54.383 ======================================================== 00:11:54.383 Latency(us) 00:11:54.383 Device Information : IOPS MiB/s Average min max 00:11:54.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.23 0.09 949413.15 618.25 1011245.10 00:11:54.383 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.97 0.07 885522.38 251.92 1011972.90 00:11:54.383 ======================================================== 00:11:54.383 Total : 342.20 0.17 920852.40 251.92 1011972.90 00:11:54.383 00:11:54.383 [2024-07-15 11:38:22.250961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2158450 (9): Bad file descriptor 00:11:54.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:54.383 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.383 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:54.383 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1875893 00:11:54.383 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1875893 00:11:54.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1875893) - No such process 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1875893 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1875893 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1875893 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.949 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.950 [2024-07-15 11:38:22.779149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1876452 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:54.950 11:38:22 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:54.950 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.950 [2024-07-15 11:38:22.848858] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
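Both rounds of this test follow the same shape: stack a deliberately slow namespace (Delay0, a delay bdev with ~1 s average latency over the 1000 MiB null bdev NULL1), point spdk_nvme_perf at it, and watch what happens to in-flight I/O. Round one (-t 5, pid 1875893) deleted the subsystem two seconds in and required perf to exit with errors, which the kill -0 poll and the NOT wait check above confirmed; round two (-t 3, pid 1876452, just launched) re-creates the subsystem and lets perf run to completion, merely polling for its exit. (The WARNING above is incidental: perf also connects to the discovery subsystem, whose listener was never explicitly added.) A minimal sketch of round one, assuming scripts/rpc.py as the RPC client in place of the script's rpc_cmd wrapper:

    scripts/rpc.py bdev_null_create NULL1 1000 512         # 1000 MiB backing, 512 B blocks
    scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000        # latencies in usec, so ~1 s
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1       # fail if perf lingers past ~15 s
        sleep 0.5
    done
    ! wait "$perf_pid"                     # perf must report a nonzero exit status

The payoff is visible a few lines below: the second run's latency table averages out around 1,003,000 us per I/O, which is exactly the Delay0 bdev's one-second latency showing through.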
00:11:55.208 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:55.208 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:55.208 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:55.774 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:55.774 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:55.774 11:38:23 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:56.341 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:56.341 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:56.341 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:56.908 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:56.908 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:56.908 11:38:24 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:57.475 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:57.475 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:57.475 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:57.734 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:57.734 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452 00:11:57.734 11:38:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:58.301 Initializing NVMe Controllers 00:11:58.301 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:58.301 Controller IO queue size 128, less than required. 00:11:58.301 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:58.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:58.301 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:58.301 Initialization complete. Launching workers. 
00:11:58.301 ========================================================
00:11:58.301                                                                                                      Latency(us)
00:11:58.301 Device Information                                                       : IOPS       MiB/s    Average        min        max
00:11:58.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00     0.06  1003064.22 1000227.46 1009170.53
00:11:58.301 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00     0.06  1005393.95 1000381.67 1042118.95
00:11:58.301 ========================================================
00:11:58.301 Total                                                                   : 256.00     0.12  1004229.08 1000227.46 1042118.95
00:11:58.301
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1876452
00:11:58.301 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1876452) - No such process
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1876452
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:58.301 rmmod nvme_tcp
00:11:58.301 rmmod nvme_fabrics
00:11:58.301 rmmod nvme_keyring
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1875654 ']'
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1875654
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1875654 ']'
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1875654
00:11:58.301 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1875654
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1875654'
00:11:58.560 killing process with pid 1875654
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1875654
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1875654
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:58.560 11:38:26 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:01.098 11:38:28 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:01.098
00:12:01.098 real 0m17.382s
00:12:01.098 user 0m29.574s
00:12:01.098 sys 0m7.064s
00:12:01.098 11:38:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:01.098 11:38:28 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:01.098 ************************************
00:12:01.098 END TEST nvmf_delete_subsystem
00:12:01.098 ************************************
00:12:01.098 11:38:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:01.098 11:38:28 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:12:01.098 11:38:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:01.098 11:38:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:01.098 11:38:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:01.098 ************************************
00:12:01.098 START TEST nvmf_ns_masking
00:12:01.098 ************************************
00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:12:01.098 * Looking for test storage...
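Condensed, the nvmftestfini teardown traced at the end of that test reduces to the steps below. This is a sketch assembled from the xtrace lines above; the 20-iteration modprobe retry loop and all error handling are omitted, and _remove_spdk_ns is the helper from the traced nvmf/common.sh:

    sync                          # flush dirty data before unloading host modules (nvmf/common.sh@117)
    modprobe -v -r nvme-tcp       # also drops nvme_fabrics and nvme_keyring, per the rmmod output above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"               # 1875654: the nvmf_tgt reactor started for the test
    wait "$nvmfpid"
    _remove_spdk_ns               # delete the cvl_0_0_ns_spdk target network namespace
    ip -4 addr flush cvl_0_1      # clear the initiator-side interface (nvmf/common.sh@279)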
00:12:01.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.098 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=976a0911-fbc1-45dc-8928-04e27880e44e 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=50b89413-9a06-4ac1-a3aa-dbc4f079b2c2 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=82029c34-cbb7-4fff-8b03-4fc95aaea4af 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.099 11:38:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:07.664 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:07.664 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.664 
11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:07.664 Found net devices under 0000:af:00.0: cvl_0_0 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:07.664 Found net devices under 0000:af:00.1: cvl_0_1 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.664 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:07.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:07.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms
00:12:07.665
00:12:07.665 --- 10.0.0.2 ping statistics ---
00:12:07.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:07.665 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:07.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:07.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms
00:12:07.665
00:12:07.665 --- 10.0.0.1 ping statistics ---
00:12:07.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:07.665 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1880646
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1880646
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1880646 ']'
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:07.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.665 11:38:34 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:07.665 [2024-07-15 11:38:34.972305] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:12:07.665 [2024-07-15 11:38:34.972353] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.665 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.665 [2024-07-15 11:38:35.047938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.665 [2024-07-15 11:38:35.120099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.665 [2024-07-15 11:38:35.120138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.665 [2024-07-15 11:38:35.120147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.665 [2024-07-15 11:38:35.120155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.665 [2024-07-15 11:38:35.120178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.665 [2024-07-15 11:38:35.120199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.665 11:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:07.665 11:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:07.665 11:38:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:07.665 11:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:07.665 11:38:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:07.924 11:38:35 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.924 11:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:07.924 [2024-07-15 11:38:35.959164] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.924 11:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:07.924 11:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:07.924 11:38:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:08.183 Malloc1 00:12:08.183 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:08.459 Malloc2 00:12:08.459 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDKISFASTANDAWESOME 00:12:08.459 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:08.731 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.731 [2024-07-15 11:38:36.824913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.991 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:08.991 11:38:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82029c34-cbb7-4fff-8b03-4fc95aaea4af -a 10.0.0.2 -s 4420 -i 4 00:12:08.991 11:38:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:08.991 11:38:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:08.991 11:38:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:08.991 11:38:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:08.991 11:38:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.527 [ 0]:0x1 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=576b063fc5c946cc994af6bde7b625af 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 576b063fc5c946cc994af6bde7b625af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Malloc2 -n 2 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:11.527 [ 0]:0x1 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=576b063fc5c946cc994af6bde7b625af 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 576b063fc5c946cc994af6bde7b625af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:11.527 [ 1]:0x2 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:11.527 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.787 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.787 11:38:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82029c34-cbb7-4fff-8b03-4fc95aaea4af -a 10.0.0.2 -s 4420 -i 4 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:12.046 11:38:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.580 
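The "[ 0]:0x1"-style checks above come from the ns_is_visible helper. Reconstructed from the traced lines (target/ns_masking.sh@43-@45), it is approximately the function below; /dev/nvme0 is hard-coded here for clarity, whereas the traced script derives the controller name from nvme list-subsys as at @26:

    ns_is_visible() {
        # the NSID must show up in the controller's active namespace list
        # (grep without -q so the "[ 0]:0x1" match is echoed, as in the trace) ...
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        # ... and report a real, non-zero NGUID; a masked namespace either
        # drops out of the list or identifies as all zeroes
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1    # usage as traced; 0x2 for the second namespace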
11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.580 [ 0]:0x2 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.580 [ 0]:0x1 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=576b063fc5c946cc994af6bde7b625af 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 576b063fc5c946cc994af6bde7b625af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.580 [ 1]:0x2 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.580 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.839 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:14.839 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:14.840 [ 0]:0x2 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:14.840 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:15.099 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:15.099 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:15.099 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:15.099 11:38:42 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:15.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.099 11:38:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:15.099 11:38:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:15.099 11:38:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 82029c34-cbb7-4fff-8b03-4fc95aaea4af -a 10.0.0.2 -s 4420 -i 4 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:15.358 11:38:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
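Taken together, the masking behaviour being verified is driven by three RPCs, each visible verbatim in the trace above; a condensed sketch, where rpc.py abbreviates the full scripts/rpc.py path used in the log:

    # register the namespace detached from every host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant NSID 1 to exactly one host NQN ("[ 0]:0x1" then reappears for that host)
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke it again (the NGUID check then reads back all zeroes, as just above)
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1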
00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:17.895 [ 0]:0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=576b063fc5c946cc994af6bde7b625af 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 576b063fc5c946cc994af6bde7b625af != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.895 [ 1]:0x2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:17.895 [ 0]:0x2 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:17.895 11:38:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:18.154 [2024-07-15 11:38:46.043300] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:12:18.154 request:
00:12:18.154 {
00:12:18.154 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:18.154 "nsid": 2,
00:12:18.154 "host": "nqn.2016-06.io.spdk:host1",
00:12:18.154 "method": "nvmf_ns_remove_host",
00:12:18.154 "req_id": 1
00:12:18.154 }
00:12:18.154 Got JSON-RPC error response
00:12:18.154 response:
00:12:18.154 {
00:12:18.154 "code": -32602,
00:12:18.154 "message": "Invalid parameters"
00:12:18.154 }
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:18.154 [ 0]:0x2
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=22f7cdb223b24c9e96ad9246e4dd79c6
00:12:18.154 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[
22f7cdb223b24c9e96ad9246e4dd79c6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:18.155 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:18.155 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:18.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1882698 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1882698 /var/tmp/host.sock 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1882698 ']' 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:18.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.414 11:38:46 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:18.414 [2024-07-15 11:38:46.322448] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:12:18.414 [2024-07-15 11:38:46.322498] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882698 ]
00:12:18.414 EAL: No free 2048 kB hugepages reported on node 1
00:12:18.414 [2024-07-15 11:38:46.392992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:18.414 [2024-07-15 11:38:46.462523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:19.350 11:38:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:19.350 11:38:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:12:19.350 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:19.351 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:19.351 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 976a0911-fbc1-45dc-8928-04e27880e44e
00:12:19.351 11:38:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:12:19.351 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 976A0911FBC145DC892804E27880E44E -i
00:12:19.609 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 50b89413-9a06-4ac1-a3aa-dbc4f079b2c2
00:12:19.609 11:38:47 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d -
00:12:19.609 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 50B894139A064AC1A3AADBC4F079B2C2 -i
00:12:19.868 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:19.868 11:38:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:12:20.127 11:39:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:20.127 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:12:20.386 nvme0n1
00:12:20.386 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:20.386 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:12:20.645 nvme1n2
00:12:20.645 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:12:20.645 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:12:20.645 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:12:20.645 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:12:20.645 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:12:20.904 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:12:20.904 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:12:20.904 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:12:20.904 11:38:48 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 976a0911-fbc1-45dc-8928-04e27880e44e == \9\7\6\a\0\9\1\1\-\f\b\c\1\-\4\5\d\c\-\8\9\2\8\-\0\4\e\2\7\8\8\0\e\4\4\e ]]
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 50b89413-9a06-4ac1-a3aa-dbc4f079b2c2 == \5\0\b\8\9\4\1\3\-\9\a\0\6\-\4\a\c\1\-\a\3\a\a\-\d\b\c\4\f\0\7\9\b\2\c\2 ]]
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1882698
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1882698 ']'
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1882698
00:12:21.163 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1882698
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1882698'
00:12:21.422 killing process with pid 1882698
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1882698
00:12:21.422 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1882698
00:12:21.682 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:21.941 rmmod nvme_tcp
00:12:21.941 rmmod nvme_fabrics
00:12:21.941 rmmod nvme_keyring
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1880646 ']'
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1880646
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1880646 ']'
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1880646
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1880646
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1880646'
00:12:21.941 killing process with pid 1880646
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1880646
00:12:21.941 11:38:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1880646
00:12:22.200 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:22.200 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:22.200 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:22.200 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:22.201 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:22.201 11:38:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:22.201 11:38:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:22.201 11:38:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:24.738 11:38:52 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:24.738
00:12:24.738 real 0m23.439s
00:12:24.738 user 0m23.595s
00:12:24.738 sys 0m7.518s
00:12:24.738 11:38:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:24.738 11:38:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:24.738 ************************************
00:12:24.738 END TEST nvmf_ns_masking
00:12:24.738 ************************************
00:12:24.738 11:38:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:24.738 11:38:52 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]]
00:12:24.738 11:38:52 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:24.738 11:38:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:24.738 11:38:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:24.738 11:38:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:24.738 ************************************
00:12:24.738 START TEST nvmf_nvme_cli
00:12:24.738 ************************************
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:12:24.738 * Looking for test storage...
00:12:24.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable
00:12:24.738 11:38:52 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=()
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:12:31.328 Found 0000:af:00.0 (0x8086 - 0x159b)
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:12:31.328 Found 0000:af:00.1 (0x8086 - 0x159b)
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:31.328 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:12:31.328 Found net devices under 0000:af:00.0: cvl_0_0
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]]
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:12:31.329 Found net devices under 0000:af:00.1: cvl_0_1
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:31.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:31.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms
00:12:31.329
00:12:31.329 --- 10.0.0.2 ping statistics ---
00:12:31.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.329 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms
00:12:31.329 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:31.589 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:31.589 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms
00:12:31.589
00:12:31.589 --- 10.0.0.1 ping statistics ---
00:12:31.589 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:31.589 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1887154
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1887154
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1887154 ']'
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:31.589 11:38:59 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:12:31.589 [2024-07-15 11:38:59.506096] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:12:31.589 [2024-07-15 11:38:59.506142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.589 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.589 [2024-07-15 11:38:59.582373] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.589 [2024-07-15 11:38:59.660130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.589 [2024-07-15 11:38:59.660167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.589 [2024-07-15 11:38:59.660176] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.589 [2024-07-15 11:38:59.660185] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.589 [2024-07-15 11:38:59.660192] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.589 [2024-07-15 11:38:59.660237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.589 [2024-07-15 11:38:59.660315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.589 [2024-07-15 11:38:59.660387] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.589 [2024-07-15 11:38:59.660389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 [2024-07-15 11:39:00.367900] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 Malloc0 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 Malloc1 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 [2024-07-15 11:39:00.451839] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.527 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -a 10.0.0.2 -s 4420 00:12:32.528 00:12:32.528 Discovery Log Number of Records 2, Generation counter 2 00:12:32.528 =====Discovery Log Entry 0====== 00:12:32.528 trtype: tcp 00:12:32.528 adrfam: ipv4 00:12:32.528 subtype: current discovery subsystem 00:12:32.528 treq: not required 00:12:32.528 portid: 0 00:12:32.528 trsvcid: 4420 00:12:32.528 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:32.528 traddr: 10.0.0.2 00:12:32.528 eflags: explicit discovery connections, duplicate discovery information 00:12:32.528 sectype: none 00:12:32.528 =====Discovery Log Entry 1====== 00:12:32.528 trtype: tcp 00:12:32.528 adrfam: ipv4 00:12:32.528 subtype: nvme subsystem 00:12:32.528 treq: not required 00:12:32.528 portid: 0 00:12:32.528 trsvcid: 4420 00:12:32.528 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:32.528 traddr: 10.0.0.2 00:12:32.528 eflags: none 00:12:32.528 sectype: none 00:12:32.528 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:32.788 11:39:00 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:34.202 11:39:01 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:36.104 11:39:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:36.105 11:39:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:03 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:36.105 11:39:04 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:36.105 /dev/nvme0n1 ]] 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.105 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:36.364 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.623 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.624 rmmod nvme_tcp 00:12:36.624 rmmod nvme_fabrics 00:12:36.624 rmmod nvme_keyring 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1887154 ']' 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1887154 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1887154 ']' 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1887154 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1887154 00:12:36.624 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1887154' 00:12:36.883 killing process with pid 1887154 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1887154 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1887154 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.883 11:39:04 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.421 11:39:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:39.421 00:12:39.421 real 0m14.708s 00:12:39.421 user 0m22.807s 00:12:39.421 sys 0m6.189s 00:12:39.421 11:39:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:39.421 11:39:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.421 ************************************ 00:12:39.421 END TEST nvmf_nvme_cli 00:12:39.421 ************************************ 00:12:39.421 11:39:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:39.421 11:39:07 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:39.421 11:39:07 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:39.421 11:39:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:39.421 11:39:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.421 11:39:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:39.421 ************************************ 00:12:39.422 START TEST nvmf_vfio_user 00:12:39.422 ************************************ 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:39.422 * Looking for test storage... 00:12:39.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:39.422 
11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1889152 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1889152' 00:12:39.422 Process pid: 1889152 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1889152 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1889152 ']' 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.422 11:39:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:39.422 [2024-07-15 11:39:07.314345] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:12:39.422 [2024-07-15 11:39:07.314394] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.422 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.422 [2024-07-15 11:39:07.383880] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.422 [2024-07-15 11:39:07.456702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.422 [2024-07-15 11:39:07.456742] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.422 [2024-07-15 11:39:07.456751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.422 [2024-07-15 11:39:07.456760] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.422 [2024-07-15 11:39:07.456767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
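At this point the target is running but empty: nvmf_tgt was started with -i 0 (shared-memory id), -e 0xFFFF (all tracepoint groups, matching the notices above) and -m '[0,1,2,3]' (a four-core reactor mask). The RPC calls traced below bring up the vfio-user side; condensed into a standalone sketch, with $SPDK_DIR standing in for the Jenkins workspace path and a running nvmf_tgt assumed, the bring-up for the first of the two devices is:

# Minimal sketch of the vfio-user bring-up that nvmf_vfio_user.sh performs below.
# Assumes a built SPDK tree at $SPDK_DIR (placeholder) and an nvmf_tgt already running.
rpc="$SPDK_DIR/scripts/rpc.py"
$rpc nvmf_create_transport -t VFIOUSER               # register the VFIOUSER transport
mkdir -p /var/run/vfio-user/domain/vfio-user1/1      # this directory is the listener traddr
$rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same sequence is repeated for Malloc2/cnode2, after which the identify, perf and example tools traced later can attach to the socket directory as if it were a local NVMe controller.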
00:12:39.422 [2024-07-15 11:39:07.456853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.422 [2024-07-15 11:39:07.456911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.422 [2024-07-15 11:39:07.456995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.422 [2024-07-15 11:39:07.456997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.360 11:39:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.360 11:39:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:40.360 11:39:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:41.298 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:41.557 Malloc1 00:12:41.557 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:41.816 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:41.816 11:39:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:42.076 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.076 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:42.076 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:42.335 Malloc2 00:12:42.335 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:42.594 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:42.594 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:42.854 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:42.854 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:42.854 11:39:10 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.854 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:42.854 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:42.854 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:42.854 [2024-07-15 11:39:10.845965] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:12:42.854 [2024-07-15 11:39:10.846006] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889712 ] 00:12:42.854 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.854 [2024-07-15 11:39:10.878189] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:42.854 [2024-07-15 11:39:10.888164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.854 [2024-07-15 11:39:10.888186] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb431795000 00:12:42.854 [2024-07-15 11:39:10.889164] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.854 [2024-07-15 11:39:10.890165] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.891173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.892179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.893184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.894187] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.895196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.896198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.855 [2024-07-15 11:39:10.897204] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.855 [2024-07-15 11:39:10.897218] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb43178a000 00:12:42.855 [2024-07-15 11:39:10.898114] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.855 [2024-07-15 11:39:10.906417] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:42.855 [2024-07-15 11:39:10.906447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:42.855 [2024-07-15 11:39:10.911308] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.855 [2024-07-15 11:39:10.911352] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:42.855 [2024-07-15 11:39:10.911427] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:42.855 [2024-07-15 11:39:10.911450] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:42.855 [2024-07-15 11:39:10.911457] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:42.855 [2024-07-15 11:39:10.912301] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:42.855 [2024-07-15 11:39:10.912312] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:42.855 [2024-07-15 11:39:10.912321] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:42.855 [2024-07-15 11:39:10.913304] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.855 [2024-07-15 11:39:10.913315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:42.855 [2024-07-15 11:39:10.913324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.914308] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:42.855 [2024-07-15 11:39:10.914318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.915313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:42.855 [2024-07-15 11:39:10.915323] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:42.855 [2024-07-15 11:39:10.915330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.915338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.915445] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:42.855 [2024-07-15 11:39:10.915455] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.915462] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:42.855 [2024-07-15 11:39:10.916323] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:42.855 [2024-07-15 11:39:10.917328] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:42.855 [2024-07-15 11:39:10.918336] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.855 [2024-07-15 11:39:10.919337] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.855 [2024-07-15 11:39:10.919426] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:42.855 [2024-07-15 11:39:10.920356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:42.855 [2024-07-15 11:39:10.920367] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:42.855 [2024-07-15 11:39:10.920373] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920392] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:42.855 [2024-07-15 11:39:10.920402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920419] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.855 [2024-07-15 11:39:10.920426] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.855 [2024-07-15 11:39:10.920441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920500] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:42.855 [2024-07-15 11:39:10.920511] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:42.855 [2024-07-15 11:39:10.920517] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:42.855 [2024-07-15 11:39:10.920523] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:42.855 [2024-07-15 11:39:10.920530] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:42.855 [2024-07-15 11:39:10.920536] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:42.855 [2024-07-15 11:39:10.920542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920552] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.855 [2024-07-15 11:39:10.920601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.855 [2024-07-15 11:39:10.920610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.855 [2024-07-15 11:39:10.920619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.855 [2024-07-15 11:39:10.920626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920637] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920647] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920663] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:42.855 [2024-07-15 11:39:10.920670] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920696] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920754] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920772] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:42.855 [2024-07-15 11:39:10.920778] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:42.855 [2024-07-15 11:39:10.920785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920808] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:42.855 [2024-07-15 11:39:10.920821] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920837] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:42.855 [2024-07-15 11:39:10.920846] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.855 [2024-07-15 11:39:10.920852] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.855 [2024-07-15 11:39:10.920860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.855 [2024-07-15 11:39:10.920878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:42.855 [2024-07-15 11:39:10.920892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920910] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.856 [2024-07-15 11:39:10.920916] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.856 [2024-07-15 11:39:10.920923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.920933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.920944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
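The nvme_vfio_ctrlr_get_reg/set_reg records in this initialization block are the generic NVMe controller enable handshake, carried over the vfio-user socket instead of a PCIe BAR. The offsets are the standard controller register map, so the handshake can be read straight off the trace; the annotation below maps them to the values seen above, and the grep line is a sketch for pulling the same records out of a saved copy of this console output (the file name is a placeholder):

# Standard NVMe controller registers behind the offsets in the trace above:
#   0x00  CAP   capabilities             -> 0x201e0100ff
#   0x08  VS    version                  -> 0x10300 (NVMe 1.3)
#   0x14  CC    configuration            -> 0x460001 once CC.EN (bit 0) is set
#   0x1c  CSTS  status                   -> 0x1 when CSTS.RDY reports ready
#   0x24  AQA   admin queue attributes   -> 0xff00ff (256-entry admin SQ and CQ)
#   0x28  ASQ   admin submission queue base address
#   0x30  ACQ   admin completion queue base address
grep -o 'nvme_vfio_ctrlr_[gs]et_reg_[48].*' console.log   # console.log is hypothetical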
00:12:42.856 [2024-07-15 11:39:10.920962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.920989] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:42.856 [2024-07-15 11:39:10.920995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:42.856 [2024-07-15 11:39:10.921002] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:42.856 [2024-07-15 11:39:10.921022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921123] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:42.856 [2024-07-15 11:39:10.921129] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:42.856 [2024-07-15 11:39:10.921134] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:42.856 [2024-07-15 11:39:10.921139] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:42.856 [2024-07-15 11:39:10.921146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:42.856 [2024-07-15 11:39:10.921154] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:42.856 
[2024-07-15 11:39:10.921160] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:42.856 [2024-07-15 11:39:10.921167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921176] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:42.856 [2024-07-15 11:39:10.921182] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.856 [2024-07-15 11:39:10.921188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921197] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:42.856 [2024-07-15 11:39:10.921203] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:42.856 [2024-07-15 11:39:10.921210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:42.856 [2024-07-15 11:39:10.921218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:42.856 [2024-07-15 11:39:10.921254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:42.856 ===================================================== 00:12:42.856 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.856 ===================================================== 00:12:42.856 Controller Capabilities/Features 00:12:42.856 ================================ 00:12:42.856 Vendor ID: 4e58 00:12:42.856 Subsystem Vendor ID: 4e58 00:12:42.856 Serial Number: SPDK1 00:12:42.856 Model Number: SPDK bdev Controller 00:12:42.856 Firmware Version: 24.09 00:12:42.856 Recommended Arb Burst: 6 00:12:42.856 IEEE OUI Identifier: 8d 6b 50 00:12:42.856 Multi-path I/O 00:12:42.856 May have multiple subsystem ports: Yes 00:12:42.856 May have multiple controllers: Yes 00:12:42.856 Associated with SR-IOV VF: No 00:12:42.856 Max Data Transfer Size: 131072 00:12:42.856 Max Number of Namespaces: 32 00:12:42.856 Max Number of I/O Queues: 127 00:12:42.856 NVMe Specification Version (VS): 1.3 00:12:42.856 NVMe Specification Version (Identify): 1.3 00:12:42.856 Maximum Queue Entries: 256 00:12:42.856 Contiguous Queues Required: Yes 00:12:42.856 Arbitration Mechanisms Supported 00:12:42.856 Weighted Round Robin: Not Supported 00:12:42.856 Vendor Specific: Not Supported 00:12:42.856 Reset Timeout: 15000 ms 00:12:42.856 Doorbell Stride: 4 bytes 00:12:42.856 NVM Subsystem Reset: Not Supported 00:12:42.856 Command Sets Supported 00:12:42.856 NVM Command Set: Supported 00:12:42.856 Boot Partition: Not Supported 00:12:42.856 Memory Page Size Minimum: 4096 bytes 00:12:42.856 Memory Page Size Maximum: 4096 bytes 00:12:42.856 Persistent Memory Region: Not Supported 
00:12:42.856 Optional Asynchronous Events Supported 00:12:42.856 Namespace Attribute Notices: Supported 00:12:42.856 Firmware Activation Notices: Not Supported 00:12:42.856 ANA Change Notices: Not Supported 00:12:42.856 PLE Aggregate Log Change Notices: Not Supported 00:12:42.856 LBA Status Info Alert Notices: Not Supported 00:12:42.856 EGE Aggregate Log Change Notices: Not Supported 00:12:42.856 Normal NVM Subsystem Shutdown event: Not Supported 00:12:42.856 Zone Descriptor Change Notices: Not Supported 00:12:42.856 Discovery Log Change Notices: Not Supported 00:12:42.856 Controller Attributes 00:12:42.856 128-bit Host Identifier: Supported 00:12:42.856 Non-Operational Permissive Mode: Not Supported 00:12:42.856 NVM Sets: Not Supported 00:12:42.856 Read Recovery Levels: Not Supported 00:12:42.856 Endurance Groups: Not Supported 00:12:42.856 Predictable Latency Mode: Not Supported 00:12:42.856 Traffic Based Keep ALive: Not Supported 00:12:42.856 Namespace Granularity: Not Supported 00:12:42.856 SQ Associations: Not Supported 00:12:42.856 UUID List: Not Supported 00:12:42.856 Multi-Domain Subsystem: Not Supported 00:12:42.856 Fixed Capacity Management: Not Supported 00:12:42.856 Variable Capacity Management: Not Supported 00:12:42.856 Delete Endurance Group: Not Supported 00:12:42.856 Delete NVM Set: Not Supported 00:12:42.856 Extended LBA Formats Supported: Not Supported 00:12:42.856 Flexible Data Placement Supported: Not Supported 00:12:42.856 00:12:42.856 Controller Memory Buffer Support 00:12:42.856 ================================ 00:12:42.856 Supported: No 00:12:42.856 00:12:42.856 Persistent Memory Region Support 00:12:42.856 ================================ 00:12:42.856 Supported: No 00:12:42.856 00:12:42.856 Admin Command Set Attributes 00:12:42.856 ============================ 00:12:42.856 Security Send/Receive: Not Supported 00:12:42.856 Format NVM: Not Supported 00:12:42.856 Firmware Activate/Download: Not Supported 00:12:42.856 Namespace Management: Not Supported 00:12:42.856 Device Self-Test: Not Supported 00:12:42.856 Directives: Not Supported 00:12:42.856 NVMe-MI: Not Supported 00:12:42.856 Virtualization Management: Not Supported 00:12:42.856 Doorbell Buffer Config: Not Supported 00:12:42.856 Get LBA Status Capability: Not Supported 00:12:42.856 Command & Feature Lockdown Capability: Not Supported 00:12:42.856 Abort Command Limit: 4 00:12:42.856 Async Event Request Limit: 4 00:12:42.856 Number of Firmware Slots: N/A 00:12:42.856 Firmware Slot 1 Read-Only: N/A 00:12:42.856 Firmware Activation Without Reset: N/A 00:12:42.856 Multiple Update Detection Support: N/A 00:12:42.856 Firmware Update Granularity: No Information Provided 00:12:42.856 Per-Namespace SMART Log: No 00:12:42.856 Asymmetric Namespace Access Log Page: Not Supported 00:12:42.856 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:42.856 Command Effects Log Page: Supported 00:12:42.856 Get Log Page Extended Data: Supported 00:12:42.856 Telemetry Log Pages: Not Supported 00:12:42.856 Persistent Event Log Pages: Not Supported 00:12:42.856 Supported Log Pages Log Page: May Support 00:12:42.856 Commands Supported & Effects Log Page: Not Supported 00:12:42.856 Feature Identifiers & Effects Log Page:May Support 00:12:42.856 NVMe-MI Commands & Effects Log Page: May Support 00:12:42.856 Data Area 4 for Telemetry Log: Not Supported 00:12:42.856 Error Log Page Entries Supported: 128 00:12:42.856 Keep Alive: Supported 00:12:42.856 Keep Alive Granularity: 10000 ms 00:12:42.856 00:12:42.856 NVM Command Set Attributes 
00:12:42.856 ========================== 00:12:42.856 Submission Queue Entry Size 00:12:42.856 Max: 64 00:12:42.856 Min: 64 00:12:42.857 Completion Queue Entry Size 00:12:42.857 Max: 16 00:12:42.857 Min: 16 00:12:42.857 Number of Namespaces: 32 00:12:42.857 Compare Command: Supported 00:12:42.857 Write Uncorrectable Command: Not Supported 00:12:42.857 Dataset Management Command: Supported 00:12:42.857 Write Zeroes Command: Supported 00:12:42.857 Set Features Save Field: Not Supported 00:12:42.857 Reservations: Not Supported 00:12:42.857 Timestamp: Not Supported 00:12:42.857 Copy: Supported 00:12:42.857 Volatile Write Cache: Present 00:12:42.857 Atomic Write Unit (Normal): 1 00:12:42.857 Atomic Write Unit (PFail): 1 00:12:42.857 Atomic Compare & Write Unit: 1 00:12:42.857 Fused Compare & Write: Supported 00:12:42.857 Scatter-Gather List 00:12:42.857 SGL Command Set: Supported (Dword aligned) 00:12:42.857 SGL Keyed: Not Supported 00:12:42.857 SGL Bit Bucket Descriptor: Not Supported 00:12:42.857 SGL Metadata Pointer: Not Supported 00:12:42.857 Oversized SGL: Not Supported 00:12:42.857 SGL Metadata Address: Not Supported 00:12:42.857 SGL Offset: Not Supported 00:12:42.857 Transport SGL Data Block: Not Supported 00:12:42.857 Replay Protected Memory Block: Not Supported 00:12:42.857 00:12:42.857 Firmware Slot Information 00:12:42.857 ========================= 00:12:42.857 Active slot: 1 00:12:42.857 Slot 1 Firmware Revision: 24.09 00:12:42.857 00:12:42.857 00:12:42.857 Commands Supported and Effects 00:12:42.857 ============================== 00:12:42.857 Admin Commands 00:12:42.857 -------------- 00:12:42.857 Get Log Page (02h): Supported 00:12:42.857 Identify (06h): Supported 00:12:42.857 Abort (08h): Supported 00:12:42.857 Set Features (09h): Supported 00:12:42.857 Get Features (0Ah): Supported 00:12:42.857 Asynchronous Event Request (0Ch): Supported 00:12:42.857 Keep Alive (18h): Supported 00:12:42.857 I/O Commands 00:12:42.857 ------------ 00:12:42.857 Flush (00h): Supported LBA-Change 00:12:42.857 Write (01h): Supported LBA-Change 00:12:42.857 Read (02h): Supported 00:12:42.857 Compare (05h): Supported 00:12:42.857 Write Zeroes (08h): Supported LBA-Change 00:12:42.857 Dataset Management (09h): Supported LBA-Change 00:12:42.857 Copy (19h): Supported LBA-Change 00:12:42.857 00:12:42.857 Error Log 00:12:42.857 ========= 00:12:42.857 00:12:42.857 Arbitration 00:12:42.857 =========== 00:12:42.857 Arbitration Burst: 1 00:12:42.857 00:12:42.857 Power Management 00:12:42.857 ================ 00:12:42.857 Number of Power States: 1 00:12:42.857 Current Power State: Power State #0 00:12:42.857 Power State #0: 00:12:42.857 Max Power: 0.00 W 00:12:42.857 Non-Operational State: Operational 00:12:42.857 Entry Latency: Not Reported 00:12:42.857 Exit Latency: Not Reported 00:12:42.857 Relative Read Throughput: 0 00:12:42.857 Relative Read Latency: 0 00:12:42.857 Relative Write Throughput: 0 00:12:42.857 Relative Write Latency: 0 00:12:42.857 Idle Power: Not Reported 00:12:42.857 Active Power: Not Reported 00:12:42.857 Non-Operational Permissive Mode: Not Supported 00:12:42.857 00:12:42.857 Health Information 00:12:42.857 ================== 00:12:42.857 Critical Warnings: 00:12:42.857 Available Spare Space: OK 00:12:42.857 Temperature: OK 00:12:42.857 Device Reliability: OK 00:12:42.857 Read Only: No 00:12:42.857 Volatile Memory Backup: OK 00:12:42.857 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:42.857 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:42.857 Available Spare: 0% 00:12:42.857 
[2024-07-15 11:39:10.921441] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:42.857 [2024-07-15 11:39:10.921451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:42.857 [2024-07-15 11:39:10.921483] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:42.857 [2024-07-15 11:39:10.921495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.857 [2024-07-15 11:39:10.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.857 [2024-07-15 11:39:10.921510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.857 [2024-07-15 11:39:10.921518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.857 [2024-07-15 11:39:10.924846] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.857 [2024-07-15 11:39:10.924861] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:42.857 [2024-07-15 11:39:10.925381] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.857 [2024-07-15 11:39:10.925445] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:42.857 [2024-07-15 11:39:10.925454] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:42.857 [2024-07-15 11:39:10.926386] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:42.857 [2024-07-15 11:39:10.926400] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:42.857 [2024-07-15 11:39:10.926452] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:42.857 [2024-07-15 11:39:10.927421] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:43.116 Available Spare Threshold: 0% 00:12:43.116 Life Percentage Used: 0% 00:12:43.116 Data Units Read: 0 00:12:43.116 Data Units Written: 0 00:12:43.116 Host Read Commands: 0 00:12:43.116 Host Write Commands: 0 00:12:43.116 Controller Busy Time: 0 minutes 00:12:43.116 Power Cycles: 0 00:12:43.116 Power On Hours: 0 hours 00:12:43.116 Unsafe Shutdowns: 0 00:12:43.116 Unrecoverable Media Errors: 0 00:12:43.116 Lifetime Error Log Entries: 0 00:12:43.116 Warning Temperature Time: 0 minutes 00:12:43.116 Critical Temperature Time: 0 minutes 00:12:43.116 00:12:43.116 Number of Queues 00:12:43.116 ================ 00:12:43.116 Number of I/O Submission Queues: 127 00:12:43.116 Number of I/O Completion Queues: 127 00:12:43.116 00:12:43.116 Active Namespaces 00:12:43.116 ================= 00:12:43.116 Namespace ID:1 00:12:43.116 Error Recovery Timeout: Unlimited 00:12:43.116 Command 
Set Identifier: NVM (00h) 00:12:43.116 Deallocate: Supported 00:12:43.116 Deallocated/Unwritten Error: Not Supported 00:12:43.116 Deallocated Read Value: Unknown 00:12:43.116 Deallocate in Write Zeroes: Not Supported 00:12:43.116 Deallocated Guard Field: 0xFFFF 00:12:43.116 Flush: Supported 00:12:43.116 Reservation: Supported 00:12:43.116 Namespace Sharing Capabilities: Multiple Controllers 00:12:43.116 Size (in LBAs): 131072 (0GiB) 00:12:43.116 Capacity (in LBAs): 131072 (0GiB) 00:12:43.116 Utilization (in LBAs): 131072 (0GiB) 00:12:43.116 NGUID: D149E89398DC4F8781725CED1553AB12 00:12:43.116 UUID: d149e893-98dc-4f87-8172-5ced1553ab12 00:12:43.116 Thin Provisioning: Not Supported 00:12:43.116 Per-NS Atomic Units: Yes 00:12:43.116 Atomic Boundary Size (Normal): 0 00:12:43.116 Atomic Boundary Size (PFail): 0 00:12:43.116 Atomic Boundary Offset: 0 00:12:43.116 Maximum Single Source Range Length: 65535 00:12:43.116 Maximum Copy Length: 65535 00:12:43.116 Maximum Source Range Count: 1 00:12:43.116 NGUID/EUI64 Never Reused: No 00:12:43.116 Namespace Write Protected: No 00:12:43.116 Number of LBA Formats: 1 00:12:43.116 Current LBA Format: LBA Format #00 00:12:43.116 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:43.116 00:12:43.116 11:39:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:43.116 EAL: No free 2048 kB hugepages reported on node 1 00:12:43.116 [2024-07-15 11:39:11.144636] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.387 Initializing NVMe Controllers 00:12:48.387 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.387 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:48.387 Initialization complete. Launching workers. 00:12:48.387 ======================================================== 00:12:48.387 Latency(us) 00:12:48.387 Device Information : IOPS MiB/s Average min max 00:12:48.387 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39940.14 156.02 3204.60 890.57 7689.81 00:12:48.387 ======================================================== 00:12:48.387 Total : 39940.14 156.02 3204.60 890.57 7689.81 00:12:48.387 00:12:48.387 [2024-07-15 11:39:16.163680] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.387 11:39:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:48.387 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.387 [2024-07-15 11:39:16.387767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.664 Initializing NVMe Controllers 00:12:53.664 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.664 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:53.664 Initialization complete. Launching workers. 
00:12:53.664 ======================================================== 00:12:53.664 Latency(us) 00:12:53.664 Device Information : IOPS MiB/s Average min max 00:12:53.664 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16045.01 62.68 7982.95 6979.72 15508.14 00:12:53.664 ======================================================== 00:12:53.664 Total : 16045.01 62.68 7982.95 6979.72 15508.14 00:12:53.664 00:12:53.664 [2024-07-15 11:39:21.430954] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.664 11:39:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:53.664 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.664 [2024-07-15 11:39:21.655002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.940 [2024-07-15 11:39:26.729102] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.940 Initializing NVMe Controllers 00:12:58.940 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.940 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:58.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:58.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:58.940 Initialization complete. Launching workers. 00:12:58.940 Starting thread on core 2 00:12:58.940 Starting thread on core 3 00:12:58.940 Starting thread on core 1 00:12:58.940 11:39:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:58.940 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.940 [2024-07-15 11:39:27.031214] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.265 [2024-07-15 11:39:30.097094] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.265 Initializing NVMe Controllers 00:13:02.265 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.265 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:02.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:02.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:02.265 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:02.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:02.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:02.265 Initialization complete. Launching workers. 
00:13:02.265 Starting thread on core 1 with urgent priority queue 00:13:02.265 Starting thread on core 2 with urgent priority queue 00:13:02.265 Starting thread on core 3 with urgent priority queue 00:13:02.265 Starting thread on core 0 with urgent priority queue 00:13:02.265 SPDK bdev Controller (SPDK1 ) core 0: 9430.00 IO/s 10.60 secs/100000 ios 00:13:02.265 SPDK bdev Controller (SPDK1 ) core 1: 8770.33 IO/s 11.40 secs/100000 ios 00:13:02.265 SPDK bdev Controller (SPDK1 ) core 2: 8497.00 IO/s 11.77 secs/100000 ios 00:13:02.265 SPDK bdev Controller (SPDK1 ) core 3: 8481.00 IO/s 11.79 secs/100000 ios 00:13:02.265 ======================================================== 00:13:02.265 00:13:02.265 11:39:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.265 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.524 [2024-07-15 11:39:30.392227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.524 Initializing NVMe Controllers 00:13:02.524 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.524 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.524 Namespace ID: 1 size: 0GB 00:13:02.524 Initialization complete. 00:13:02.524 INFO: using host memory buffer for IO 00:13:02.524 Hello world! 00:13:02.524 [2024-07-15 11:39:30.426584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.524 11:39:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.524 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.783 [2024-07-15 11:39:30.709209] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:03.721 Initializing NVMe Controllers 00:13:03.721 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.721 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.721 Initialization complete. Launching workers. 
00:13:03.721 submit (in ns) avg, min, max = 7851.3, 3113.6, 4002580.0 00:13:03.721 complete (in ns) avg, min, max = 20156.0, 1713.6, 5990660.8 00:13:03.721 00:13:03.721 Submit histogram 00:13:03.721 ================ 00:13:03.721 Range in us Cumulative Count 00:13:03.721 3.110 - 3.123: 0.0593% ( 10) 00:13:03.721 3.123 - 3.136: 0.2431% ( 31) 00:13:03.721 3.136 - 3.149: 0.6521% ( 69) 00:13:03.721 3.149 - 3.162: 1.4228% ( 130) 00:13:03.721 3.162 - 3.174: 2.6322% ( 204) 00:13:03.721 3.174 - 3.187: 4.2922% ( 280) 00:13:03.721 3.187 - 3.200: 6.8473% ( 431) 00:13:03.721 3.200 - 3.213: 9.9597% ( 525) 00:13:03.721 3.213 - 3.226: 13.5464% ( 605) 00:13:03.721 3.226 - 3.238: 18.1883% ( 783) 00:13:03.721 3.238 - 3.251: 23.3519% ( 871) 00:13:03.721 3.251 - 3.264: 28.9898% ( 951) 00:13:03.721 3.264 - 3.277: 34.1831% ( 876) 00:13:03.721 3.277 - 3.302: 44.8897% ( 1806) 00:13:03.721 3.302 - 3.328: 55.0866% ( 1720) 00:13:03.721 3.328 - 3.354: 63.6768% ( 1449) 00:13:03.721 3.354 - 3.379: 71.0576% ( 1245) 00:13:03.721 3.379 - 3.405: 78.4622% ( 1249) 00:13:03.721 3.405 - 3.430: 82.8907% ( 747) 00:13:03.721 3.430 - 3.456: 86.1987% ( 558) 00:13:03.721 3.456 - 3.482: 87.6571% ( 246) 00:13:03.721 3.482 - 3.507: 88.4930% ( 141) 00:13:03.721 3.507 - 3.533: 89.5068% ( 171) 00:13:03.721 3.533 - 3.558: 90.9770% ( 248) 00:13:03.721 3.558 - 3.584: 92.6547% ( 283) 00:13:03.721 3.584 - 3.610: 94.2969% ( 277) 00:13:03.721 3.610 - 3.635: 95.8501% ( 262) 00:13:03.721 3.635 - 3.661: 97.0477% ( 202) 00:13:03.721 3.661 - 3.686: 98.0081% ( 162) 00:13:03.721 3.686 - 3.712: 98.7728% ( 129) 00:13:03.721 3.712 - 3.738: 99.1404% ( 62) 00:13:03.721 3.738 - 3.763: 99.3597% ( 37) 00:13:03.721 3.763 - 3.789: 99.4961% ( 23) 00:13:03.721 3.789 - 3.814: 99.5850% ( 15) 00:13:03.721 3.814 - 3.840: 99.6265% ( 7) 00:13:03.721 3.866 - 3.891: 99.6324% ( 1) 00:13:03.721 5.248 - 5.274: 99.6384% ( 1) 00:13:03.721 5.555 - 5.581: 99.6443% ( 1) 00:13:03.721 5.939 - 5.965: 99.6502% ( 1) 00:13:03.721 6.195 - 6.221: 99.6562% ( 1) 00:13:03.721 6.426 - 6.451: 99.6621% ( 1) 00:13:03.721 6.554 - 6.605: 99.6680% ( 1) 00:13:03.721 6.605 - 6.656: 99.6739% ( 1) 00:13:03.721 6.758 - 6.810: 99.6858% ( 2) 00:13:03.721 6.810 - 6.861: 99.6917% ( 1) 00:13:03.721 6.861 - 6.912: 99.6977% ( 1) 00:13:03.721 6.912 - 6.963: 99.7036% ( 1) 00:13:03.721 6.963 - 7.014: 99.7154% ( 2) 00:13:03.721 7.014 - 7.066: 99.7392% ( 4) 00:13:03.721 7.066 - 7.117: 99.7451% ( 1) 00:13:03.721 7.219 - 7.270: 99.7510% ( 1) 00:13:03.721 7.270 - 7.322: 99.7629% ( 2) 00:13:03.721 7.373 - 7.424: 99.7688% ( 1) 00:13:03.721 7.475 - 7.526: 99.7747% ( 1) 00:13:03.721 7.578 - 7.629: 99.7806% ( 1) 00:13:03.721 7.629 - 7.680: 99.7925% ( 2) 00:13:03.721 7.782 - 7.834: 99.7984% ( 1) 00:13:03.721 7.936 - 7.987: 99.8044% ( 1) 00:13:03.721 7.987 - 8.038: 99.8103% ( 1) 00:13:03.721 8.038 - 8.090: 99.8162% ( 1) 00:13:03.721 8.192 - 8.243: 99.8221% ( 1) 00:13:03.721 8.294 - 8.346: 99.8340% ( 2) 00:13:03.721 8.346 - 8.397: 99.8399% ( 1) 00:13:03.721 8.397 - 8.448: 99.8459% ( 1) 00:13:03.721 8.499 - 8.550: 99.8518% ( 1) 00:13:03.721 8.602 - 8.653: 99.8577% ( 1) 00:13:03.721 8.755 - 8.806: 99.8636% ( 1) 00:13:03.721 9.216 - 9.267: 99.8696% ( 1) 00:13:03.721 9.267 - 9.318: 99.8755% ( 1) 00:13:03.721 9.574 - 9.626: 99.8814% ( 1) 00:13:03.721 11.315 - 11.366: 99.8874% ( 1) 00:13:03.721 3984.589 - 4010.803: 100.0000% ( 19) 00:13:03.721 00:13:03.721 Complete histogram 00:13:03.721 ================== 00:13:03.722 Range in us Cumulative Count 00:13:03.722 1.702 - 1.715: 0.0059% ( 1) 00:13:03.722 1.715 - 1.728: 0.2194% ( 36) 
00:13:03.722 1.728 - 1.741: 1.2687% ( 177) 00:13:03.722 1.741 - 1.754: 2.2943% ( 173) 00:13:03.722 1.754 - 1.766: 2.8871% ( 100) 00:13:03.722 [2024-07-15 11:39:31.727549] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.722 1.766 - 1.779: 15.4968% ( 2127) 00:13:03.722 1.779 - 1.792: 66.7003% ( 8637) 00:13:03.722 1.792 - 1.805: 91.1015% ( 4116) 00:13:03.722 1.805 - 1.818: 95.5063% ( 743) 00:13:03.722 1.818 - 1.830: 96.8461% ( 226) 00:13:03.722 1.830 - 1.843: 97.2314% ( 65) 00:13:03.722 1.843 - 1.856: 97.8421% ( 103) 00:13:03.722 1.856 - 1.869: 98.7135% ( 147) 00:13:03.722 1.869 - 1.882: 99.1463% ( 73) 00:13:03.722 1.882 - 1.894: 99.2471% ( 17) 00:13:03.722 1.894 - 1.907: 99.2886% ( 7) 00:13:03.722 1.907 - 1.920: 99.3242% ( 6) 00:13:03.722 1.920 - 1.933: 99.3360% ( 2) 00:13:03.722 1.958 - 1.971: 99.3419% ( 1) 00:13:03.722 2.022 - 2.035: 99.3479% ( 1) 00:13:03.722 4.096 - 4.122: 99.3597% ( 2) 00:13:03.722 4.480 - 4.506: 99.3657% ( 1) 00:13:03.722 4.582 - 4.608: 99.3716% ( 1) 00:13:03.722 4.813 - 4.838: 99.3775% ( 1) 00:13:03.722 4.890 - 4.915: 99.3894% ( 2) 00:13:03.722 5.299 - 5.325: 99.4072% ( 3) 00:13:03.722 5.350 - 5.376: 99.4131% ( 1) 00:13:03.722 5.632 - 5.658: 99.4190% ( 1) 00:13:03.722 5.760 - 5.786: 99.4249% ( 1) 00:13:03.722 5.811 - 5.837: 99.4309% ( 1) 00:13:03.722 5.888 - 5.914: 99.4427% ( 2) 00:13:03.722 5.914 - 5.939: 99.4487% ( 1) 00:13:03.722 6.195 - 6.221: 99.4546% ( 1) 00:13:03.722 6.605 - 6.656: 99.4605% ( 1) 00:13:03.722 6.707 - 6.758: 99.4664% ( 1) 00:13:03.722 6.758 - 6.810: 99.4724% ( 1) 00:13:03.722 6.810 - 6.861: 99.4783% ( 1) 00:13:03.722 6.912 - 6.963: 99.4842% ( 1) 00:13:03.722 7.066 - 7.117: 99.4902% ( 1) 00:13:03.722 7.219 - 7.270: 99.4961% ( 1) 00:13:03.722 8.038 - 8.090: 99.5020% ( 1) 00:13:03.722 8.141 - 8.192: 99.5079% ( 1) 00:13:03.722 8.243 - 8.294: 99.5139% ( 1) 00:13:03.722 8.550 - 8.602: 99.5198% ( 1) 00:13:03.722 8.704 - 8.755: 99.5257% ( 1) 00:13:03.722 8.806 - 8.858: 99.5317% ( 1) 00:13:03.722 10.445 - 10.496: 99.5376% ( 1) 00:13:03.722 16.179 - 16.282: 99.5435% ( 1) 00:13:03.722 3984.589 - 4010.803: 99.9941% ( 76) 00:13:03.722 5976.883 - 6003.098: 100.0000% ( 1) 00:13:03.722 00 00:13:03.722 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:03.722 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:03.722 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:03.722 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:03.722 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:03.981 [ 00:13:03.981 { 00:13:03.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:03.981 "subtype": "Discovery", 00:13:03.981 "listen_addresses": [], 00:13:03.981 "allow_any_host": true, 00:13:03.981 "hosts": [] 00:13:03.981 }, 00:13:03.981 { 00:13:03.981 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:03.981 "subtype": "NVMe", 00:13:03.981 "listen_addresses": [ 00:13:03.981 { 00:13:03.981 "trtype": "VFIOUSER", 00:13:03.981 "adrfam": "IPv4", 00:13:03.981 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:03.981 "trsvcid": "0" 00:13:03.981 } 00:13:03.981 ], 00:13:03.981 "allow_any_host": true, 00:13:03.981 "hosts": [], 
00:13:03.981 "serial_number": "SPDK1", 00:13:03.981 "model_number": "SPDK bdev Controller", 00:13:03.981 "max_namespaces": 32, 00:13:03.981 "min_cntlid": 1, 00:13:03.981 "max_cntlid": 65519, 00:13:03.981 "namespaces": [ 00:13:03.981 { 00:13:03.981 "nsid": 1, 00:13:03.981 "bdev_name": "Malloc1", 00:13:03.981 "name": "Malloc1", 00:13:03.981 "nguid": "D149E89398DC4F8781725CED1553AB12", 00:13:03.982 "uuid": "d149e893-98dc-4f87-8172-5ced1553ab12" 00:13:03.982 } 00:13:03.982 ] 00:13:03.982 }, 00:13:03.982 { 00:13:03.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:03.982 "subtype": "NVMe", 00:13:03.982 "listen_addresses": [ 00:13:03.982 { 00:13:03.982 "trtype": "VFIOUSER", 00:13:03.982 "adrfam": "IPv4", 00:13:03.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:03.982 "trsvcid": "0" 00:13:03.982 } 00:13:03.982 ], 00:13:03.982 "allow_any_host": true, 00:13:03.982 "hosts": [], 00:13:03.982 "serial_number": "SPDK2", 00:13:03.982 "model_number": "SPDK bdev Controller", 00:13:03.982 "max_namespaces": 32, 00:13:03.982 "min_cntlid": 1, 00:13:03.982 "max_cntlid": 65519, 00:13:03.982 "namespaces": [ 00:13:03.982 { 00:13:03.982 "nsid": 1, 00:13:03.982 "bdev_name": "Malloc2", 00:13:03.982 "name": "Malloc2", 00:13:03.982 "nguid": "CBF551207F55475C9BB663A1D4E416B6", 00:13:03.982 "uuid": "cbf55120-7f55-475c-9bb6-63a1d4e416b6" 00:13:03.982 } 00:13:03.982 ] 00:13:03.982 } 00:13:03.982 ] 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1893395 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:03.982 11:39:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:03.982 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.241 [2024-07-15 11:39:32.114305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:04.241 Malloc3 00:13:04.241 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:04.241 [2024-07-15 11:39:32.294581] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:04.241 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.241 Asynchronous Event Request test 00:13:04.241 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.241 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.241 Registering asynchronous event callbacks... 00:13:04.241 Starting namespace attribute notice tests for all controllers... 00:13:04.241 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.241 aer_cb - Changed Namespace 00:13:04.241 Cleaning up... 00:13:04.502 [ 00:13:04.502 { 00:13:04.502 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.502 "subtype": "Discovery", 00:13:04.502 "listen_addresses": [], 00:13:04.502 "allow_any_host": true, 00:13:04.502 "hosts": [] 00:13:04.502 }, 00:13:04.502 { 00:13:04.502 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.502 "subtype": "NVMe", 00:13:04.502 "listen_addresses": [ 00:13:04.502 { 00:13:04.502 "trtype": "VFIOUSER", 00:13:04.502 "adrfam": "IPv4", 00:13:04.502 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.502 "trsvcid": "0" 00:13:04.502 } 00:13:04.502 ], 00:13:04.502 "allow_any_host": true, 00:13:04.502 "hosts": [], 00:13:04.502 "serial_number": "SPDK1", 00:13:04.502 "model_number": "SPDK bdev Controller", 00:13:04.502 "max_namespaces": 32, 00:13:04.502 "min_cntlid": 1, 00:13:04.502 "max_cntlid": 65519, 00:13:04.502 "namespaces": [ 00:13:04.502 { 00:13:04.502 "nsid": 1, 00:13:04.502 "bdev_name": "Malloc1", 00:13:04.502 "name": "Malloc1", 00:13:04.502 "nguid": "D149E89398DC4F8781725CED1553AB12", 00:13:04.502 "uuid": "d149e893-98dc-4f87-8172-5ced1553ab12" 00:13:04.502 }, 00:13:04.502 { 00:13:04.502 "nsid": 2, 00:13:04.502 "bdev_name": "Malloc3", 00:13:04.502 "name": "Malloc3", 00:13:04.502 "nguid": "32D32D486F5F4C4DA9D8750F90B439C5", 00:13:04.502 "uuid": "32d32d48-6f5f-4c4d-a9d8-750f90b439c5" 00:13:04.502 } 00:13:04.502 ] 00:13:04.502 }, 00:13:04.502 { 00:13:04.502 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.502 "subtype": "NVMe", 00:13:04.502 "listen_addresses": [ 00:13:04.502 { 00:13:04.502 "trtype": "VFIOUSER", 00:13:04.502 "adrfam": "IPv4", 00:13:04.502 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.502 "trsvcid": "0" 00:13:04.502 } 00:13:04.502 ], 00:13:04.502 "allow_any_host": true, 00:13:04.502 "hosts": [], 00:13:04.502 "serial_number": "SPDK2", 00:13:04.502 "model_number": "SPDK bdev Controller", 00:13:04.502 
"max_namespaces": 32, 00:13:04.502 "min_cntlid": 1, 00:13:04.502 "max_cntlid": 65519, 00:13:04.502 "namespaces": [ 00:13:04.502 { 00:13:04.502 "nsid": 1, 00:13:04.502 "bdev_name": "Malloc2", 00:13:04.502 "name": "Malloc2", 00:13:04.502 "nguid": "CBF551207F55475C9BB663A1D4E416B6", 00:13:04.502 "uuid": "cbf55120-7f55-475c-9bb6-63a1d4e416b6" 00:13:04.502 } 00:13:04.502 ] 00:13:04.502 } 00:13:04.502 ] 00:13:04.502 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1893395 00:13:04.502 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:04.502 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.502 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.502 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:04.502 [2024-07-15 11:39:32.516845] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:13:04.502 [2024-07-15 11:39:32.516874] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893414 ] 00:13:04.502 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.502 [2024-07-15 11:39:32.545648] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:04.502 [2024-07-15 11:39:32.556395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.502 [2024-07-15 11:39:32.556418] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8b2dbe6000 00:13:04.502 [2024-07-15 11:39:32.557397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.558406] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.559407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.560409] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.561413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.562422] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.563427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.564434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.502 [2024-07-15 11:39:32.565440] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.502 [2024-07-15 11:39:32.565451] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8b2dbdb000 00:13:04.502 [2024-07-15 11:39:32.566343] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:04.502 [2024-07-15 11:39:32.576968] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:04.502 [2024-07-15 11:39:32.576995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:04.502 [2024-07-15 11:39:32.582070] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.502 [2024-07-15 11:39:32.582111] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:04.502 [2024-07-15 11:39:32.582180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:04.502 [2024-07-15 11:39:32.582199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:04.502 [2024-07-15 11:39:32.582207] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:04.502 [2024-07-15 11:39:32.583071] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:04.502 [2024-07-15 11:39:32.583082] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:04.502 [2024-07-15 11:39:32.583091] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:04.502 [2024-07-15 11:39:32.584077] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.502 [2024-07-15 11:39:32.584087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:04.502 [2024-07-15 11:39:32.584096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:04.502 [2024-07-15 11:39:32.585083] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:04.502 [2024-07-15 11:39:32.585094] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:04.502 [2024-07-15 11:39:32.586086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:04.502 [2024-07-15 11:39:32.586097] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:04.502 [2024-07-15 11:39:32.586103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:04.503 [2024-07-15 11:39:32.586112] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:04.503 [2024-07-15 11:39:32.586218] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:04.503 [2024-07-15 11:39:32.586225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:04.503 [2024-07-15 11:39:32.586231] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:04.503 [2024-07-15 11:39:32.587091] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:04.503 [2024-07-15 11:39:32.588094] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:04.503 [2024-07-15 11:39:32.589104] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:04.503 [2024-07-15 11:39:32.590106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:04.503 [2024-07-15 11:39:32.590149] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:04.503 [2024-07-15 11:39:32.591128] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:04.503 [2024-07-15 11:39:32.591141] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:04.503 [2024-07-15 11:39:32.591147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.591166] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:04.503 [2024-07-15 11:39:32.591175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.591189] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.503 [2024-07-15 11:39:32.591195] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.503 [2024-07-15 11:39:32.591209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.503 [2024-07-15 11:39:32.597843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:04.503 [2024-07-15 11:39:32.597857] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:04.503 [2024-07-15 11:39:32.597866] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:04.503 [2024-07-15 11:39:32.597872] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:04.503 [2024-07-15 11:39:32.597878] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:04.503 [2024-07-15 11:39:32.597885] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:04.503 [2024-07-15 11:39:32.597893] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:04.503 [2024-07-15 11:39:32.597899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.597908] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.597918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:04.503 [2024-07-15 11:39:32.605841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:04.503 [2024-07-15 11:39:32.605858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.503 [2024-07-15 11:39:32.605869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.503 [2024-07-15 11:39:32.605879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.503 [2024-07-15 11:39:32.605888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.503 [2024-07-15 11:39:32.605894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.605904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:04.503 [2024-07-15 11:39:32.605916] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.613842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.613852] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:04.764 [2024-07-15 11:39:32.613859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.613867] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.613874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.613884] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.621842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.621895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.621904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.621913] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:04.764 [2024-07-15 11:39:32.621919] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:04.764 [2024-07-15 11:39:32.621927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.629842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.629855] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:04.764 [2024-07-15 11:39:32.629866] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.629875] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.629883] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.764 [2024-07-15 11:39:32.629889] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.764 [2024-07-15 11:39:32.629896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.637840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.637856] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.637865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.637873] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:04.764 [2024-07-15 11:39:32.637878] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.764 [2024-07-15 11:39:32.637886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.645841] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.645852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645861] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645872] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645879] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645886] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645892] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645898] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:04.764 [2024-07-15 11:39:32.645904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:04.764 [2024-07-15 11:39:32.645911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:04.764 [2024-07-15 11:39:32.645929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.653840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.653856] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.661840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.661855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.669839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.669855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.677841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.677860] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:04.764 [2024-07-15 11:39:32.677866] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:04.764 [2024-07-15 11:39:32.677871] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:04.764 [2024-07-15 11:39:32.677876] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:04.764 [2024-07-15 11:39:32.677883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:04.764 [2024-07-15 11:39:32.677891] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:04.764 [2024-07-15 11:39:32.677897] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:04.764 [2024-07-15 11:39:32.677903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.677911] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:04.764 [2024-07-15 11:39:32.677917] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:04.764 [2024-07-15 11:39:32.677924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.677932] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:04.764 [2024-07-15 11:39:32.677938] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:04.764 [2024-07-15 11:39:32.677944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:04.764 [2024-07-15 11:39:32.685840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.685856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.685868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:04.764 [2024-07-15 11:39:32.685877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:04.764 ===================================================== 00:13:04.764 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:04.764 ===================================================== 00:13:04.764 Controller Capabilities/Features 00:13:04.764 ================================ 00:13:04.764 Vendor ID: 4e58 00:13:04.764 Subsystem Vendor ID: 4e58 00:13:04.764 Serial Number: SPDK2 00:13:04.765 Model Number: SPDK bdev Controller 00:13:04.765 Firmware Version: 24.09 00:13:04.765 Recommended Arb Burst: 6 00:13:04.765 IEEE OUI Identifier: 8d 6b 50 00:13:04.765 Multi-path I/O 00:13:04.765 May have multiple subsystem ports: Yes 00:13:04.765 May have multiple controllers: Yes 00:13:04.765 Associated with SR-IOV VF: No 00:13:04.765 Max Data Transfer Size: 131072 00:13:04.765 Max Number of Namespaces: 32 00:13:04.765 Max Number of I/O Queues: 127 00:13:04.765 NVMe Specification Version (VS): 1.3 00:13:04.765 NVMe Specification Version (Identify): 1.3 00:13:04.765 Maximum Queue Entries: 256 00:13:04.765 Contiguous Queues Required: Yes 00:13:04.765 Arbitration Mechanisms 
Supported 00:13:04.765 Weighted Round Robin: Not Supported 00:13:04.765 Vendor Specific: Not Supported 00:13:04.765 Reset Timeout: 15000 ms 00:13:04.765 Doorbell Stride: 4 bytes 00:13:04.765 NVM Subsystem Reset: Not Supported 00:13:04.765 Command Sets Supported 00:13:04.765 NVM Command Set: Supported 00:13:04.765 Boot Partition: Not Supported 00:13:04.765 Memory Page Size Minimum: 4096 bytes 00:13:04.765 Memory Page Size Maximum: 4096 bytes 00:13:04.765 Persistent Memory Region: Not Supported 00:13:04.765 Optional Asynchronous Events Supported 00:13:04.765 Namespace Attribute Notices: Supported 00:13:04.765 Firmware Activation Notices: Not Supported 00:13:04.765 ANA Change Notices: Not Supported 00:13:04.765 PLE Aggregate Log Change Notices: Not Supported 00:13:04.765 LBA Status Info Alert Notices: Not Supported 00:13:04.765 EGE Aggregate Log Change Notices: Not Supported 00:13:04.765 Normal NVM Subsystem Shutdown event: Not Supported 00:13:04.765 Zone Descriptor Change Notices: Not Supported 00:13:04.765 Discovery Log Change Notices: Not Supported 00:13:04.765 Controller Attributes 00:13:04.765 128-bit Host Identifier: Supported 00:13:04.765 Non-Operational Permissive Mode: Not Supported 00:13:04.765 NVM Sets: Not Supported 00:13:04.765 Read Recovery Levels: Not Supported 00:13:04.765 Endurance Groups: Not Supported 00:13:04.765 Predictable Latency Mode: Not Supported 00:13:04.765 Traffic Based Keep ALive: Not Supported 00:13:04.765 Namespace Granularity: Not Supported 00:13:04.765 SQ Associations: Not Supported 00:13:04.765 UUID List: Not Supported 00:13:04.765 Multi-Domain Subsystem: Not Supported 00:13:04.765 Fixed Capacity Management: Not Supported 00:13:04.765 Variable Capacity Management: Not Supported 00:13:04.765 Delete Endurance Group: Not Supported 00:13:04.765 Delete NVM Set: Not Supported 00:13:04.765 Extended LBA Formats Supported: Not Supported 00:13:04.765 Flexible Data Placement Supported: Not Supported 00:13:04.765 00:13:04.765 Controller Memory Buffer Support 00:13:04.765 ================================ 00:13:04.765 Supported: No 00:13:04.765 00:13:04.765 Persistent Memory Region Support 00:13:04.765 ================================ 00:13:04.765 Supported: No 00:13:04.765 00:13:04.765 Admin Command Set Attributes 00:13:04.765 ============================ 00:13:04.765 Security Send/Receive: Not Supported 00:13:04.765 Format NVM: Not Supported 00:13:04.765 Firmware Activate/Download: Not Supported 00:13:04.765 Namespace Management: Not Supported 00:13:04.765 Device Self-Test: Not Supported 00:13:04.765 Directives: Not Supported 00:13:04.765 NVMe-MI: Not Supported 00:13:04.765 Virtualization Management: Not Supported 00:13:04.765 Doorbell Buffer Config: Not Supported 00:13:04.765 Get LBA Status Capability: Not Supported 00:13:04.765 Command & Feature Lockdown Capability: Not Supported 00:13:04.765 Abort Command Limit: 4 00:13:04.765 Async Event Request Limit: 4 00:13:04.765 Number of Firmware Slots: N/A 00:13:04.765 Firmware Slot 1 Read-Only: N/A 00:13:04.765 Firmware Activation Without Reset: N/A 00:13:04.765 Multiple Update Detection Support: N/A 00:13:04.765 Firmware Update Granularity: No Information Provided 00:13:04.765 Per-Namespace SMART Log: No 00:13:04.765 Asymmetric Namespace Access Log Page: Not Supported 00:13:04.765 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:04.765 Command Effects Log Page: Supported 00:13:04.765 Get Log Page Extended Data: Supported 00:13:04.765 Telemetry Log Pages: Not Supported 00:13:04.765 Persistent Event Log Pages: Not Supported 
00:13:04.765 Supported Log Pages Log Page: May Support 00:13:04.765 Commands Supported & Effects Log Page: Not Supported 00:13:04.765 Feature Identifiers & Effects Log Page:May Support 00:13:04.765 NVMe-MI Commands & Effects Log Page: May Support 00:13:04.765 Data Area 4 for Telemetry Log: Not Supported 00:13:04.765 Error Log Page Entries Supported: 128 00:13:04.765 Keep Alive: Supported 00:13:04.765 Keep Alive Granularity: 10000 ms 00:13:04.765 00:13:04.765 NVM Command Set Attributes 00:13:04.765 ========================== 00:13:04.765 Submission Queue Entry Size 00:13:04.765 Max: 64 00:13:04.765 Min: 64 00:13:04.765 Completion Queue Entry Size 00:13:04.765 Max: 16 00:13:04.765 Min: 16 00:13:04.765 Number of Namespaces: 32 00:13:04.765 Compare Command: Supported 00:13:04.765 Write Uncorrectable Command: Not Supported 00:13:04.765 Dataset Management Command: Supported 00:13:04.765 Write Zeroes Command: Supported 00:13:04.765 Set Features Save Field: Not Supported 00:13:04.765 Reservations: Not Supported 00:13:04.765 Timestamp: Not Supported 00:13:04.765 Copy: Supported 00:13:04.765 Volatile Write Cache: Present 00:13:04.765 Atomic Write Unit (Normal): 1 00:13:04.765 Atomic Write Unit (PFail): 1 00:13:04.765 Atomic Compare & Write Unit: 1 00:13:04.765 Fused Compare & Write: Supported 00:13:04.765 Scatter-Gather List 00:13:04.765 SGL Command Set: Supported (Dword aligned) 00:13:04.765 SGL Keyed: Not Supported 00:13:04.765 SGL Bit Bucket Descriptor: Not Supported 00:13:04.765 SGL Metadata Pointer: Not Supported 00:13:04.765 Oversized SGL: Not Supported 00:13:04.765 SGL Metadata Address: Not Supported 00:13:04.765 SGL Offset: Not Supported 00:13:04.765 Transport SGL Data Block: Not Supported 00:13:04.765 Replay Protected Memory Block: Not Supported 00:13:04.765 00:13:04.765 Firmware Slot Information 00:13:04.765 ========================= 00:13:04.765 Active slot: 1 00:13:04.765 Slot 1 Firmware Revision: 24.09 00:13:04.765 00:13:04.765 00:13:04.765 Commands Supported and Effects 00:13:04.765 ============================== 00:13:04.765 Admin Commands 00:13:04.765 -------------- 00:13:04.765 Get Log Page (02h): Supported 00:13:04.765 Identify (06h): Supported 00:13:04.765 Abort (08h): Supported 00:13:04.765 Set Features (09h): Supported 00:13:04.765 Get Features (0Ah): Supported 00:13:04.765 Asynchronous Event Request (0Ch): Supported 00:13:04.765 Keep Alive (18h): Supported 00:13:04.765 I/O Commands 00:13:04.765 ------------ 00:13:04.765 Flush (00h): Supported LBA-Change 00:13:04.765 Write (01h): Supported LBA-Change 00:13:04.765 Read (02h): Supported 00:13:04.765 Compare (05h): Supported 00:13:04.765 Write Zeroes (08h): Supported LBA-Change 00:13:04.765 Dataset Management (09h): Supported LBA-Change 00:13:04.765 Copy (19h): Supported LBA-Change 00:13:04.765 00:13:04.765 Error Log 00:13:04.765 ========= 00:13:04.765 00:13:04.765 Arbitration 00:13:04.765 =========== 00:13:04.765 Arbitration Burst: 1 00:13:04.765 00:13:04.765 Power Management 00:13:04.765 ================ 00:13:04.765 Number of Power States: 1 00:13:04.765 Current Power State: Power State #0 00:13:04.765 Power State #0: 00:13:04.765 Max Power: 0.00 W 00:13:04.765 Non-Operational State: Operational 00:13:04.765 Entry Latency: Not Reported 00:13:04.765 Exit Latency: Not Reported 00:13:04.765 Relative Read Throughput: 0 00:13:04.765 Relative Read Latency: 0 00:13:04.765 Relative Write Throughput: 0 00:13:04.765 Relative Write Latency: 0 00:13:04.765 Idle Power: Not Reported 00:13:04.765 Active Power: Not Reported 00:13:04.765 
Non-Operational Permissive Mode: Not Supported 00:13:04.765 00:13:04.765 Health Information 00:13:04.765 ================== 00:13:04.765 Critical Warnings: 00:13:04.765 Available Spare Space: OK 00:13:04.765 Temperature: OK 00:13:04.765 Device Reliability: OK 00:13:04.765 Read Only: No 00:13:04.765 Volatile Memory Backup: OK 00:13:04.765 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:04.765 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:04.765 Available Spare: 0% 00:13:04.765 Available Spare Threshold: 0% [2024-07-15 11:39:32.685968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:04.765 [2024-07-15 11:39:32.693842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:04.765 [2024-07-15 11:39:32.693877] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:04.765 [2024-07-15 11:39:32.693888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.765 [2024-07-15 11:39:32.693896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.765 [2024-07-15 11:39:32.693905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.765 [2024-07-15 11:39:32.693913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.765 [2024-07-15 11:39:32.693963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:04.766 [2024-07-15 11:39:32.693975] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:04.766 [2024-07-15 11:39:32.694970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:04.766 [2024-07-15 11:39:32.695024] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:04.766 [2024-07-15 11:39:32.695033] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:04.766 [2024-07-15 11:39:32.695972] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:04.766 [2024-07-15 11:39:32.695985] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:04.766 [2024-07-15 11:39:32.696031] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:04.766 [2024-07-15 11:39:32.698840] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:04.766 Life Percentage Used: 0% 00:13:04.766 Data Units Read: 0 00:13:04.766 Data Units Written: 0 00:13:04.766 Host Read Commands: 0 00:13:04.766 Host Write Commands: 0 00:13:04.766 Controller Busy Time: 0 minutes 00:13:04.766 Power Cycles: 0 00:13:04.766 Power On Hours: 0 hours 00:13:04.766 Unsafe Shutdowns: 0 00:13:04.766 Unrecoverable Media 
Errors: 0 00:13:04.766 Lifetime Error Log Entries: 0 00:13:04.766 Warning Temperature Time: 0 minutes 00:13:04.766 Critical Temperature Time: 0 minutes 00:13:04.766 00:13:04.766 Number of Queues 00:13:04.766 ================ 00:13:04.766 Number of I/O Submission Queues: 127 00:13:04.766 Number of I/O Completion Queues: 127 00:13:04.766 00:13:04.766 Active Namespaces 00:13:04.766 ================= 00:13:04.766 Namespace ID:1 00:13:04.766 Error Recovery Timeout: Unlimited 00:13:04.766 Command Set Identifier: NVM (00h) 00:13:04.766 Deallocate: Supported 00:13:04.766 Deallocated/Unwritten Error: Not Supported 00:13:04.766 Deallocated Read Value: Unknown 00:13:04.766 Deallocate in Write Zeroes: Not Supported 00:13:04.766 Deallocated Guard Field: 0xFFFF 00:13:04.766 Flush: Supported 00:13:04.766 Reservation: Supported 00:13:04.766 Namespace Sharing Capabilities: Multiple Controllers 00:13:04.766 Size (in LBAs): 131072 (0GiB) 00:13:04.766 Capacity (in LBAs): 131072 (0GiB) 00:13:04.766 Utilization (in LBAs): 131072 (0GiB) 00:13:04.766 NGUID: CBF551207F55475C9BB663A1D4E416B6 00:13:04.766 UUID: cbf55120-7f55-475c-9bb6-63a1d4e416b6 00:13:04.766 Thin Provisioning: Not Supported 00:13:04.766 Per-NS Atomic Units: Yes 00:13:04.766 Atomic Boundary Size (Normal): 0 00:13:04.766 Atomic Boundary Size (PFail): 0 00:13:04.766 Atomic Boundary Offset: 0 00:13:04.766 Maximum Single Source Range Length: 65535 00:13:04.766 Maximum Copy Length: 65535 00:13:04.766 Maximum Source Range Count: 1 00:13:04.766 NGUID/EUI64 Never Reused: No 00:13:04.766 Namespace Write Protected: No 00:13:04.766 Number of LBA Formats: 1 00:13:04.766 Current LBA Format: LBA Format #00 00:13:04.766 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:04.766 00:13:04.766 11:39:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:04.766 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.025 [2024-07-15 11:39:32.916033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.332 Initializing NVMe Controllers 00:13:10.332 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.332 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:10.332 Initialization complete. Launching workers. 
00:13:10.332 ======================================================== 00:13:10.332 Latency(us) 00:13:10.332 Device Information : IOPS MiB/s Average min max 00:13:10.332 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39947.24 156.04 3204.06 916.23 7350.76 00:13:10.332 ======================================================== 00:13:10.332 Total : 39947.24 156.04 3204.06 916.23 7350.76 00:13:10.332 00:13:10.332 [2024-07-15 11:39:38.024082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.332 11:39:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:10.332 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.332 [2024-07-15 11:39:38.245724] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.603 Initializing NVMe Controllers 00:13:15.603 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.603 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:15.603 Initialization complete. Launching workers. 00:13:15.603 ======================================================== 00:13:15.603 Latency(us) 00:13:15.603 Device Information : IOPS MiB/s Average min max 00:13:15.603 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39958.39 156.09 3204.78 925.87 8215.26 00:13:15.603 ======================================================== 00:13:15.603 Total : 39958.39 156.09 3204.78 925.87 8215.26 00:13:15.603 00:13:15.603 [2024-07-15 11:39:43.269136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.603 11:39:43 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:15.603 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.604 [2024-07-15 11:39:43.479282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:20.878 [2024-07-15 11:39:48.614940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:20.878 Initializing NVMe Controllers 00:13:20.878 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:20.878 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:20.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:20.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:20.878 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:20.878 Initialization complete. Launching workers. 
00:13:20.878 Starting thread on core 2 00:13:20.878 Starting thread on core 3 00:13:20.878 Starting thread on core 1 00:13:20.878 11:39:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:20.878 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.878 [2024-07-15 11:39:48.928267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.072 [2024-07-15 11:39:52.576071] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.072 Initializing NVMe Controllers 00:13:25.072 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.072 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.072 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:25.072 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:25.072 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:25.072 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:25.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:25.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:25.072 Initialization complete. Launching workers. 00:13:25.072 Starting thread on core 1 with urgent priority queue 00:13:25.072 Starting thread on core 2 with urgent priority queue 00:13:25.072 Starting thread on core 3 with urgent priority queue 00:13:25.072 Starting thread on core 0 with urgent priority queue 00:13:25.072 SPDK bdev Controller (SPDK2 ) core 0: 1564.67 IO/s 63.91 secs/100000 ios 00:13:25.072 SPDK bdev Controller (SPDK2 ) core 1: 1522.67 IO/s 65.67 secs/100000 ios 00:13:25.072 SPDK bdev Controller (SPDK2 ) core 2: 2458.67 IO/s 40.67 secs/100000 ios 00:13:25.072 SPDK bdev Controller (SPDK2 ) core 3: 1232.33 IO/s 81.15 secs/100000 ios 00:13:25.072 ======================================================== 00:13:25.072 00:13:25.072 11:39:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:25.072 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.072 [2024-07-15 11:39:52.872288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.072 Initializing NVMe Controllers 00:13:25.072 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.072 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.072 Namespace ID: 1 size: 0GB 00:13:25.072 Initialization complete. 00:13:25.072 INFO: using host memory buffer for IO 00:13:25.072 Hello world! 
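For reference, every example binary exercised in this run (spdk_nvme_identify, spdk_nvme_perf, reconnect, arbitration, hello_world, and the overhead tool below) is pointed at the same vfio-user endpoint through an SPDK transport ID string passed via -r. A minimal sketch of that invocation pattern, assuming only paths and flags that appear verbatim in the command lines above (per-tool option sets differ; see each traced command for the exact arguments):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
  # Dump controller and namespace data for the subsystem behind the vfio-user socket directory
  "$SPDK/build/bin/spdk_nvme_identify" -r "$TRID" -g
  # 5-second, queue-depth-128, 4 KiB read workload against the same endpoint on core 1 (mask 0x2)
  "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Note that for this transport the traddr field is the controller's socket directory created by the target rather than a network address, and the target reports trsvcid as "0" for VFIOUSER listeners in the nvmf_get_subsystems output above.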
00:13:25.072 [2024-07-15 11:39:52.884377] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:25.072 11:39:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:25.072 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.072 [2024-07-15 11:39:53.173453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.449 Initializing NVMe Controllers 00:13:26.449 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.449 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.449 Initialization complete. Launching workers. 00:13:26.449 submit (in ns) avg, min, max = 6339.2, 3091.2, 3999625.6 00:13:26.449 complete (in ns) avg, min, max = 20810.9, 1716.0, 3998745.6 00:13:26.449 00:13:26.449 Submit histogram 00:13:26.449 ================ 00:13:26.449 Range in us Cumulative Count 00:13:26.449 3.085 - 3.098: 0.0058% ( 1) 00:13:26.449 3.098 - 3.110: 0.0290% ( 4) 00:13:26.449 3.110 - 3.123: 0.0870% ( 10) 00:13:26.449 3.123 - 3.136: 0.1508% ( 11) 00:13:26.449 3.136 - 3.149: 0.4756% ( 56) 00:13:26.449 3.149 - 3.162: 1.6647% ( 205) 00:13:26.449 3.162 - 3.174: 3.4339% ( 305) 00:13:26.449 3.174 - 3.187: 6.0151% ( 445) 00:13:26.449 3.187 - 3.200: 10.5220% ( 777) 00:13:26.449 3.200 - 3.213: 15.6206% ( 879) 00:13:26.449 3.213 - 3.226: 21.4385% ( 1003) 00:13:26.449 3.226 - 3.238: 27.0824% ( 973) 00:13:26.449 3.238 - 3.251: 33.7993% ( 1158) 00:13:26.449 3.251 - 3.264: 39.7274% ( 1022) 00:13:26.449 3.264 - 3.277: 46.0557% ( 1091) 00:13:26.449 3.277 - 3.302: 57.1694% ( 1916) 00:13:26.449 3.302 - 3.328: 63.4745% ( 1087) 00:13:26.449 3.328 - 3.354: 70.1566% ( 1152) 00:13:26.449 3.354 - 3.379: 75.4234% ( 908) 00:13:26.449 3.379 - 3.405: 80.6787% ( 906) 00:13:26.449 3.405 - 3.430: 86.7807% ( 1052) 00:13:26.449 3.430 - 3.456: 88.6601% ( 324) 00:13:26.449 3.456 - 3.482: 89.2053% ( 94) 00:13:26.449 3.482 - 3.507: 89.8782% ( 116) 00:13:26.449 3.507 - 3.533: 91.1717% ( 223) 00:13:26.449 3.533 - 3.558: 92.8480% ( 289) 00:13:26.449 3.558 - 3.584: 94.5592% ( 295) 00:13:26.449 3.584 - 3.610: 95.8121% ( 216) 00:13:26.449 3.610 - 3.635: 96.8794% ( 184) 00:13:26.449 3.635 - 3.661: 97.8712% ( 171) 00:13:26.449 3.661 - 3.686: 98.6311% ( 131) 00:13:26.449 3.686 - 3.712: 99.1299% ( 86) 00:13:26.449 3.712 - 3.738: 99.3910% ( 45) 00:13:26.449 3.738 - 3.763: 99.5882% ( 34) 00:13:26.449 3.763 - 3.789: 99.6752% ( 15) 00:13:26.449 3.789 - 3.814: 99.6926% ( 3) 00:13:26.449 3.814 - 3.840: 99.6984% ( 1) 00:13:26.449 5.709 - 5.734: 99.7042% ( 1) 00:13:26.449 5.786 - 5.811: 99.7100% ( 1) 00:13:26.449 5.990 - 6.016: 99.7158% ( 1) 00:13:26.449 6.016 - 6.042: 99.7274% ( 2) 00:13:26.449 6.221 - 6.246: 99.7332% ( 1) 00:13:26.449 6.298 - 6.323: 99.7390% ( 1) 00:13:26.449 6.374 - 6.400: 99.7564% ( 3) 00:13:26.449 6.400 - 6.426: 99.7622% ( 1) 00:13:26.449 6.554 - 6.605: 99.7680% ( 1) 00:13:26.449 6.605 - 6.656: 99.7796% ( 2) 00:13:26.449 6.656 - 6.707: 99.7854% ( 1) 00:13:26.449 6.707 - 6.758: 99.7970% ( 2) 00:13:26.449 6.758 - 6.810: 99.8028% ( 1) 00:13:26.449 6.810 - 6.861: 99.8086% ( 1) 00:13:26.449 6.861 - 6.912: 99.8202% ( 2) 00:13:26.449 7.066 - 7.117: 99.8318% ( 2) 00:13:26.449 7.117 - 7.168: 99.8434% ( 2) 00:13:26.449 7.219 - 7.270: 99.8550% ( 2) 00:13:26.449 7.270 - 7.322: 99.8608% ( 1) 00:13:26.449 7.424 
- 7.475: 99.8666% ( 1) 00:13:26.449 7.885 - 7.936: 99.8724% ( 1) 00:13:26.449 7.936 - 7.987: 99.8782% ( 1) 00:13:26.449 8.294 - 8.346: 99.8898% ( 2) 00:13:26.449 8.448 - 8.499: 99.8956% ( 1) 00:13:26.449 8.755 - 8.806: 99.9014% ( 1) 00:13:26.449 8.806 - 8.858: 99.9072% ( 1) 00:13:26.449 10.547 - 10.598: 99.9130% ( 1) 00:13:26.449 10.650 - 10.701: 99.9188% ( 1) 00:13:26.449 10.803 - 10.854: 99.9246% ( 1) 00:13:26.449 3984.589 - 4010.803: 100.0000% ( 13) 00:13:26.449 00:13:26.449 Complete histogram 00:13:26.449 ================== 00:13:26.449 Range in us Cumulative Count 00:13:26.450 1.715 - 1.728: 0.0522% ( 9) 00:13:26.450 1.728 - 1.741: 0.4002% ( 60) 00:13:26.450 1.741 - 1.754: 1.3167% ( 158) 00:13:26.450 1.754 - 1.766: 2.7436% ( 246) 00:13:26.450 1.766 - 1.779: 34.3561% ( 5450) 00:13:26.450 1.779 - 1.792: 81.6763% ( 8158) 00:13:26.450 1.792 - 1.805: 92.3318% ( 1837) 00:13:26.450 1.805 - 1.818: 96.0035% ( 633) 00:13:26.450 1.818 - 1.830: 96.5545% ( 95) 00:13:26.450 1.830 - 1.843: 97.1346% ( 100) [2024-07-15 11:39:54.265663] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.450 1.843 - 1.856: 98.1439% ( 174) 00:13:26.450 1.856 - 1.869: 98.9153% ( 133) 00:13:26.450 1.869 - 1.882: 99.1473% ( 40) 00:13:26.450 1.882 - 1.894: 99.2227% ( 13) 00:13:26.450 1.894 - 1.907: 99.2401% ( 3) 00:13:26.450 1.907 - 1.920: 99.2459% ( 1) 00:13:26.450 1.933 - 1.946: 99.2517% ( 1) 00:13:26.450 1.946 - 1.958: 99.2575% ( 1) 00:13:26.450 1.958 - 1.971: 99.2691% ( 2) 00:13:26.450 1.984 - 1.997: 99.2749% ( 1) 00:13:26.450 4.122 - 4.147: 99.2807% ( 1) 00:13:26.450 4.198 - 4.224: 99.2923% ( 2) 00:13:26.450 4.224 - 4.250: 99.2981% ( 1) 00:13:26.450 4.506 - 4.531: 99.3039% ( 1) 00:13:26.450 4.608 - 4.634: 99.3097% ( 1) 00:13:26.450 4.762 - 4.787: 99.3155% ( 1) 00:13:26.450 4.890 - 4.915: 99.3213% ( 1) 00:13:26.450 4.992 - 5.018: 99.3271% ( 1) 00:13:26.450 5.094 - 5.120: 99.3387% ( 2) 00:13:26.450 5.197 - 5.222: 99.3503% ( 2) 00:13:26.450 5.222 - 5.248: 99.3619% ( 2) 00:13:26.450 5.376 - 5.402: 99.3735% ( 2) 00:13:26.450 5.402 - 5.427: 99.3794% ( 1) 00:13:26.450 5.530 - 5.555: 99.3852% ( 1) 00:13:26.450 5.555 - 5.581: 99.3968% ( 2) 00:13:26.450 5.632 - 5.658: 99.4200% ( 4) 00:13:26.450 5.658 - 5.683: 99.4316% ( 2) 00:13:26.450 5.734 - 5.760: 99.4374% ( 1) 00:13:26.450 5.760 - 5.786: 99.4432% ( 1) 00:13:26.450 5.914 - 5.939: 99.4490% ( 1) 00:13:26.450 6.144 - 6.170: 99.4548% ( 1) 00:13:26.450 6.554 - 6.605: 99.4606% ( 1) 00:13:26.450 6.861 - 6.912: 99.4664% ( 1) 00:13:26.450 6.963 - 7.014: 99.4722% ( 1) 00:13:26.450 7.066 - 7.117: 99.4838% ( 2) 00:13:26.450 9.677 - 9.728: 99.4896% ( 1) 00:13:26.450 9.728 - 9.779: 99.4954% ( 1) 00:13:26.450 10.445 - 10.496: 99.5012% ( 1) 00:13:26.450 14.234 - 14.336: 99.5070% ( 1) 00:13:26.450 30.106 - 30.310: 99.5128% ( 1) 00:13:26.450 71.680 - 72.090: 99.5186% ( 1) 00:13:26.450 151.552 - 152.371: 99.5244% ( 1) 00:13:26.450 3984.589 - 4010.803: 100.0000% ( 82) 00:13:26.450 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:26.450 11:39:54 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:26.450 [ 00:13:26.450 { 00:13:26.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.450 "subtype": "Discovery", 00:13:26.450 "listen_addresses": [], 00:13:26.450 "allow_any_host": true, 00:13:26.450 "hosts": [] 00:13:26.450 }, 00:13:26.450 { 00:13:26.450 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:26.450 "subtype": "NVMe", 00:13:26.450 "listen_addresses": [ 00:13:26.450 { 00:13:26.450 "trtype": "VFIOUSER", 00:13:26.450 "adrfam": "IPv4", 00:13:26.450 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:26.450 "trsvcid": "0" 00:13:26.450 } 00:13:26.450 ], 00:13:26.450 "allow_any_host": true, 00:13:26.450 "hosts": [], 00:13:26.450 "serial_number": "SPDK1", 00:13:26.450 "model_number": "SPDK bdev Controller", 00:13:26.450 "max_namespaces": 32, 00:13:26.450 "min_cntlid": 1, 00:13:26.450 "max_cntlid": 65519, 00:13:26.450 "namespaces": [ 00:13:26.450 { 00:13:26.450 "nsid": 1, 00:13:26.450 "bdev_name": "Malloc1", 00:13:26.450 "name": "Malloc1", 00:13:26.450 "nguid": "D149E89398DC4F8781725CED1553AB12", 00:13:26.450 "uuid": "d149e893-98dc-4f87-8172-5ced1553ab12" 00:13:26.450 }, 00:13:26.450 { 00:13:26.450 "nsid": 2, 00:13:26.450 "bdev_name": "Malloc3", 00:13:26.450 "name": "Malloc3", 00:13:26.450 "nguid": "32D32D486F5F4C4DA9D8750F90B439C5", 00:13:26.450 "uuid": "32d32d48-6f5f-4c4d-a9d8-750f90b439c5" 00:13:26.450 } 00:13:26.450 ] 00:13:26.450 }, 00:13:26.450 { 00:13:26.450 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:26.450 "subtype": "NVMe", 00:13:26.450 "listen_addresses": [ 00:13:26.450 { 00:13:26.450 "trtype": "VFIOUSER", 00:13:26.450 "adrfam": "IPv4", 00:13:26.450 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:26.450 "trsvcid": "0" 00:13:26.450 } 00:13:26.450 ], 00:13:26.450 "allow_any_host": true, 00:13:26.450 "hosts": [], 00:13:26.450 "serial_number": "SPDK2", 00:13:26.450 "model_number": "SPDK bdev Controller", 00:13:26.450 "max_namespaces": 32, 00:13:26.450 "min_cntlid": 1, 00:13:26.450 "max_cntlid": 65519, 00:13:26.450 "namespaces": [ 00:13:26.450 { 00:13:26.450 "nsid": 1, 00:13:26.450 "bdev_name": "Malloc2", 00:13:26.450 "name": "Malloc2", 00:13:26.450 "nguid": "CBF551207F55475C9BB663A1D4E416B6", 00:13:26.450 "uuid": "cbf55120-7f55-475c-9bb6-63a1d4e416b6" 00:13:26.450 } 00:13:26.450 ] 00:13:26.450 } 00:13:26.450 ] 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1897189 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:26.450 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:26.710 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.710 [2024-07-15 11:39:54.669258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.710 Malloc4 00:13:26.710 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:26.969 [2024-07-15 11:39:54.848502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.969 11:39:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:26.969 Asynchronous Event Request test 00:13:26.969 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.969 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.969 Registering asynchronous event callbacks... 00:13:26.969 Starting namespace attribute notice tests for all controllers... 00:13:26.969 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:26.969 aer_cb - Changed Namespace 00:13:26.969 Cleaning up... 00:13:26.969 [ 00:13:26.969 { 00:13:26.969 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.969 "subtype": "Discovery", 00:13:26.969 "listen_addresses": [], 00:13:26.969 "allow_any_host": true, 00:13:26.969 "hosts": [] 00:13:26.969 }, 00:13:26.969 { 00:13:26.969 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:26.969 "subtype": "NVMe", 00:13:26.969 "listen_addresses": [ 00:13:26.969 { 00:13:26.969 "trtype": "VFIOUSER", 00:13:26.969 "adrfam": "IPv4", 00:13:26.969 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:26.969 "trsvcid": "0" 00:13:26.969 } 00:13:26.969 ], 00:13:26.969 "allow_any_host": true, 00:13:26.969 "hosts": [], 00:13:26.969 "serial_number": "SPDK1", 00:13:26.969 "model_number": "SPDK bdev Controller", 00:13:26.969 "max_namespaces": 32, 00:13:26.969 "min_cntlid": 1, 00:13:26.969 "max_cntlid": 65519, 00:13:26.969 "namespaces": [ 00:13:26.969 { 00:13:26.969 "nsid": 1, 00:13:26.969 "bdev_name": "Malloc1", 00:13:26.969 "name": "Malloc1", 00:13:26.969 "nguid": "D149E89398DC4F8781725CED1553AB12", 00:13:26.969 "uuid": "d149e893-98dc-4f87-8172-5ced1553ab12" 00:13:26.969 }, 00:13:26.969 { 00:13:26.970 "nsid": 2, 00:13:26.970 "bdev_name": "Malloc3", 00:13:26.970 "name": "Malloc3", 00:13:26.970 "nguid": "32D32D486F5F4C4DA9D8750F90B439C5", 00:13:26.970 "uuid": "32d32d48-6f5f-4c4d-a9d8-750f90b439c5" 00:13:26.970 } 00:13:26.970 ] 00:13:26.970 }, 00:13:26.970 { 00:13:26.970 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:26.970 "subtype": "NVMe", 00:13:26.970 "listen_addresses": [ 00:13:26.970 { 00:13:26.970 "trtype": "VFIOUSER", 00:13:26.970 "adrfam": "IPv4", 00:13:26.970 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:26.970 "trsvcid": "0" 00:13:26.970 } 00:13:26.970 ], 00:13:26.970 "allow_any_host": true, 00:13:26.970 "hosts": [], 00:13:26.970 "serial_number": "SPDK2", 00:13:26.970 "model_number": "SPDK bdev Controller", 00:13:26.970 
"max_namespaces": 32, 00:13:26.970 "min_cntlid": 1, 00:13:26.970 "max_cntlid": 65519, 00:13:26.970 "namespaces": [ 00:13:26.970 { 00:13:26.970 "nsid": 1, 00:13:26.970 "bdev_name": "Malloc2", 00:13:26.970 "name": "Malloc2", 00:13:26.970 "nguid": "CBF551207F55475C9BB663A1D4E416B6", 00:13:26.970 "uuid": "cbf55120-7f55-475c-9bb6-63a1d4e416b6" 00:13:26.970 }, 00:13:26.970 { 00:13:26.970 "nsid": 2, 00:13:26.970 "bdev_name": "Malloc4", 00:13:26.970 "name": "Malloc4", 00:13:26.970 "nguid": "4705619E34BF411EB1D02A3018272936", 00:13:26.970 "uuid": "4705619e-34bf-411e-b1d0-2a3018272936" 00:13:26.970 } 00:13:26.970 ] 00:13:26.970 } 00:13:26.970 ] 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1897189 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1889152 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1889152 ']' 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1889152 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.970 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1889152 00:13:27.229 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.229 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.229 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1889152' 00:13:27.229 killing process with pid 1889152 00:13:27.229 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1889152 00:13:27.229 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1889152 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1897390 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1897390' 00:13:27.489 Process pid: 1897390 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1897390 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1897390 ']' 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.489 11:39:55 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.489 11:39:55 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.489 [2024-07-15 11:39:55.423545] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:27.489 [2024-07-15 11:39:55.424460] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:13:27.489 [2024-07-15 11:39:55.424497] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.489 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.489 [2024-07-15 11:39:55.496323] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.489 [2024-07-15 11:39:55.568485] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.489 [2024-07-15 11:39:55.568526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.489 [2024-07-15 11:39:55.568535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.489 [2024-07-15 11:39:55.568543] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.489 [2024-07-15 11:39:55.568550] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.489 [2024-07-15 11:39:55.568932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.489 [2024-07-15 11:39:55.568954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.489 [2024-07-15 11:39:55.569038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.489 [2024-07-15 11:39:55.569040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.749 [2024-07-15 11:39:55.645167] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:27.749 [2024-07-15 11:39:55.645213] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:27.749 [2024-07-15 11:39:55.645381] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:27.749 [2024-07-15 11:39:55.645715] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:27.749 [2024-07-15 11:39:55.645970] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
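The target was just restarted with --interrupt-mode, and the trace that follows re-creates the VFIO-USER transport with the '-M -I' args and rebuilds both controllers. A minimal sketch of that setup sequence, using only paths, names, and flags that appear verbatim in the trace below (the second device repeats the same steps with Malloc2/SPDK2/vfio-user2):

  # transport created with the '-M -I' args passed to setup_nvmf_vfio_user, then one malloc-backed subsystem per device
  scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0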
00:13:28.317 11:39:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.317 11:39:56 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:28.317 11:39:56 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:29.254 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:29.513 Malloc1 00:13:29.513 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:29.774 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:30.033 11:39:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:30.292 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:30.292 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:30.292 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.292 Malloc2 00:13:30.292 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:30.551 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1897390 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1897390 ']' 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1897390 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:30.810 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:30.810 11:39:58 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1897390 00:13:31.070 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.070 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.070 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1897390' 00:13:31.070 killing process with pid 1897390 00:13:31.070 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1897390 00:13:31.070 11:39:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1897390 00:13:31.070 11:39:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:31.070 11:39:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:31.070 00:13:31.070 real 0m52.040s 00:13:31.070 user 3m24.827s 00:13:31.070 sys 0m4.699s 00:13:31.070 11:39:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.070 11:39:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:31.070 ************************************ 00:13:31.070 END TEST nvmf_vfio_user 00:13:31.070 ************************************ 00:13:31.330 11:39:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:31.330 11:39:59 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:31.330 11:39:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:31.330 11:39:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.330 11:39:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:31.330 ************************************ 00:13:31.330 START TEST nvmf_vfio_user_nvme_compliance 00:13:31.330 ************************************ 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:31.330 * Looking for test storage... 
00:13:31.330 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.330 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1898206 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1898206' 00:13:31.331 Process pid: 1898206 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1898206 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1898206 ']' 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.331 11:39:59 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.331 [2024-07-15 11:39:59.424414] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:13:31.331 [2024-07-15 11:39:59.424475] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.591 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.591 [2024-07-15 11:39:59.493616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:31.591 [2024-07-15 11:39:59.567014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.591 [2024-07-15 11:39:59.567054] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.591 [2024-07-15 11:39:59.567064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.591 [2024-07-15 11:39:59.567072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.591 [2024-07-15 11:39:59.567079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
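Once the reactors below come up on cores 0-2 (nvmf_tgt -m 0x7), the compliance script creates a plain VFIOUSER transport, backs it with a 64 MB malloc bdev (512-byte blocks), exposes it as nqn.2021-09.io.spdk:cnode0, and runs the compliance binary against it. A minimal sketch, mirroring the rpc_cmd calls traced below:

  # build the target side, then point nvme_compliance at the vfio-user socket directory
  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'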
00:13:31.591 [2024-07-15 11:39:59.567124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.591 [2024-07-15 11:39:59.567201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.591 [2024-07-15 11:39:59.567204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.158 11:40:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.159 11:40:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:32.159 11:40:00 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.557 malloc0 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.557 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.558 11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.558 
11:40:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:33.558 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.558 00:13:33.558 00:13:33.558 CUnit - A unit testing framework for C - Version 2.1-3 00:13:33.558 http://cunit.sourceforge.net/ 00:13:33.558 00:13:33.558 00:13:33.558 Suite: nvme_compliance 00:13:33.558 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 11:40:01.467997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.558 [2024-07-15 11:40:01.469348] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:33.558 [2024-07-15 11:40:01.469370] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:33.558 [2024-07-15 11:40:01.469381] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:33.558 [2024-07-15 11:40:01.472028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.558 passed 00:13:33.558 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 11:40:01.549563] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.558 [2024-07-15 11:40:01.552583] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.558 passed 00:13:33.558 Test: admin_identify_ns ...[2024-07-15 11:40:01.632893] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.817 [2024-07-15 11:40:01.693842] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:33.817 [2024-07-15 11:40:01.701842] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:33.817 [2024-07-15 11:40:01.722932] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.817 passed 00:13:33.817 Test: admin_get_features_mandatory_features ...[2024-07-15 11:40:01.795249] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.817 [2024-07-15 11:40:01.798278] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.817 passed 00:13:33.817 Test: admin_get_features_optional_features ...[2024-07-15 11:40:01.873781] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.817 [2024-07-15 11:40:01.876804] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.817 passed 00:13:34.076 Test: admin_set_features_number_of_queues ...[2024-07-15 11:40:01.952252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.076 [2024-07-15 11:40:02.057930] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.076 passed 00:13:34.076 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 11:40:02.131319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.076 [2024-07-15 11:40:02.134342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.076 passed 00:13:34.336 Test: admin_get_log_page_with_lpo ...[2024-07-15 11:40:02.209821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.336 [2024-07-15 11:40:02.279844] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:34.336 [2024-07-15 11:40:02.292905] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.336 passed 00:13:34.336 Test: fabric_property_get ...[2024-07-15 11:40:02.366295] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.336 [2024-07-15 11:40:02.367530] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:34.336 [2024-07-15 11:40:02.369312] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.336 passed 00:13:34.595 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 11:40:02.444811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.595 [2024-07-15 11:40:02.447066] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:34.595 [2024-07-15 11:40:02.448844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.595 passed 00:13:34.595 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 11:40:02.525288] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.595 [2024-07-15 11:40:02.609843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.595 [2024-07-15 11:40:02.625842] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.595 [2024-07-15 11:40:02.630934] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.595 passed 00:13:34.854 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 11:40:02.703312] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.854 [2024-07-15 11:40:02.704556] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:34.854 [2024-07-15 11:40:02.706332] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.854 passed 00:13:34.854 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 11:40:02.781810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.854 [2024-07-15 11:40:02.858841] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:34.854 [2024-07-15 11:40:02.882843] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.854 [2024-07-15 11:40:02.887910] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.854 passed 00:13:35.113 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 11:40:02.961195] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.113 [2024-07-15 11:40:02.962429] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:35.113 [2024-07-15 11:40:02.962454] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:35.113 [2024-07-15 11:40:02.964220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.113 passed 00:13:35.113 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 11:40:03.037664] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.114 [2024-07-15 11:40:03.129837] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:13:35.114 [2024-07-15 11:40:03.137849] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:35.114 [2024-07-15 11:40:03.145849] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:35.114 [2024-07-15 11:40:03.153836] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:35.114 [2024-07-15 11:40:03.182914] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.114 passed 00:13:35.372 Test: admin_create_io_sq_verify_pc ...[2024-07-15 11:40:03.255239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.372 [2024-07-15 11:40:03.274846] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:35.372 [2024-07-15 11:40:03.291420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.372 passed 00:13:35.372 Test: admin_create_io_qp_max_qps ...[2024-07-15 11:40:03.365931] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:36.751 [2024-07-15 11:40:04.456844] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:36.751 [2024-07-15 11:40:04.836256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.010 passed 00:13:37.010 Test: admin_create_io_sq_shared_cq ...[2024-07-15 11:40:04.912839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.010 [2024-07-15 11:40:05.046840] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:37.010 [2024-07-15 11:40:05.083907] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.010 passed 00:13:37.010 00:13:37.010 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.010 suites 1 1 n/a 0 0 00:13:37.010 tests 18 18 18 0 0 00:13:37.010 asserts 360 360 360 0 n/a 00:13:37.010 00:13:37.010 Elapsed time = 1.483 seconds 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1898206 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1898206 ']' 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1898206 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1898206 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1898206' 00:13:37.269 killing process with pid 1898206 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1898206 00:13:37.269 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1898206 00:13:37.529 11:40:05 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:37.529 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:37.529 00:13:37.529 real 0m6.157s 00:13:37.529 user 0m17.369s 00:13:37.529 sys 0m0.683s 00:13:37.529 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.529 11:40:05 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.529 ************************************ 00:13:37.529 END TEST nvmf_vfio_user_nvme_compliance 00:13:37.529 ************************************ 00:13:37.529 11:40:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:37.529 11:40:05 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:37.529 11:40:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.529 11:40:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.529 11:40:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:37.529 ************************************ 00:13:37.529 START TEST nvmf_vfio_user_fuzz 00:13:37.529 ************************************ 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:37.530 * Looking for test storage... 00:13:37.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.530 11:40:05 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1899328 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1899328' 00:13:37.530 Process pid: 1899328 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1899328 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1899328 ']' 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
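The fuzz target follows the same pattern as the compliance run: a single-core nvmf_tgt (-m 0x1), one malloc-backed subsystem listening at /var/run/vfio-user, then a 30-second nvme_fuzz run with a fixed seed (-S 123456) so results are reproducible. A sketch of the sequence, matching the trace that follows:

  # target setup, then the fuzzer on core 1 (-m 0x2) with admin-command fuzzing enabled (-a)
  rpc_cmd nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc_cmd bdev_malloc_create 64 512 -b malloc0
  rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a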
00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.530 11:40:05 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.468 11:40:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.468 11:40:06 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:38.468 11:40:06 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 malloc0 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.405 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:39.663 11:40:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:11.758 Fuzzing completed. 
Shutting down the fuzz application 00:14:11.758 00:14:11.758 Dumping successful admin opcodes: 00:14:11.758 8, 9, 10, 24, 00:14:11.758 Dumping successful io opcodes: 00:14:11.758 0, 00:14:11.758 NS: 0x200003a1ef00 I/O qp, Total commands completed: 906096, total successful commands: 3536, random_seed: 2359385664 00:14:11.758 NS: 0x200003a1ef00 admin qp, Total commands completed: 197519, total successful commands: 1579, random_seed: 4292784448 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1899328 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1899328 ']' 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1899328 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1899328 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1899328' 00:14:11.758 killing process with pid 1899328 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1899328 00:14:11.758 11:40:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1899328 00:14:11.758 11:40:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:11.758 11:40:38 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:11.758 00:14:11.758 real 0m32.809s 00:14:11.758 user 0m28.588s 00:14:11.758 sys 0m33.987s 00:14:11.758 11:40:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.758 11:40:38 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.758 ************************************ 00:14:11.758 END TEST nvmf_vfio_user_fuzz 00:14:11.758 ************************************ 00:14:11.758 11:40:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:11.758 11:40:38 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.758 11:40:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.758 11:40:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.758 11:40:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:11.758 ************************************ 
00:14:11.758 START TEST nvmf_host_management 00:14:11.758 ************************************ 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:11.758 * Looking for test storage... 00:14:11.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.758 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.759 
11:40:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.759 11:40:38 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.759 11:40:38 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:17.037 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:17.037 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:17.037 Found net devices under 0000:af:00.0: cvl_0_0 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:17.037 Found net devices under 0000:af:00.1: cvl_0_1 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:17.037 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:17.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:17.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms
00:14:17.297 
00:14:17.297 --- 10.0.0.2 ping statistics ---
00:14:17.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:17.297 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:17.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:17.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.258 ms
00:14:17.297 
00:14:17.297 --- 10.0.0.1 ping statistics ---
00:14:17.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:17.297 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:17.297 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp
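That second ping closes the loop: initiator and target can reach each other in both directions, so the TCP test topology is ready. The namespace plumbing is scattered across the trace above (nvmf/common.sh@244 through @268); gathered in one place, with nothing beyond the traced commands, it is:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, in-namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP inbound
    ping -c 1 10.0.0.2                                                  # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # namespace -> initiator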
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1908007
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1908007
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1908007 ']'
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:17.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:17.586 11:40:45 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:17.586 [2024-07-15 11:40:45.465842] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:14:17.586 [2024-07-15 11:40:45.465891] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:17.586 EAL: No free 2048 kB hugepages reported on node 1
00:14:17.586 [2024-07-15 11:40:45.540565] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:17.586 [2024-07-15 11:40:45.615080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:17.586 [2024-07-15 11:40:45.615118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:17.586 [2024-07-15 11:40:45.615128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:17.586 [2024-07-15 11:40:45.615136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:17.586 [2024-07-15 11:40:45.615143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:17.586 [2024-07-15 11:40:45.615243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:14:17.586 [2024-07-15 11:40:45.615329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:14:17.586 [2024-07-15 11:40:45.615436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:17.586 [2024-07-15 11:40:45.615437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 [2024-07-15 11:40:46.317777] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 Malloc0
00:14:18.523 [2024-07-15 11:40:46.384504] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1908304
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1908304 /var/tmp/bdevperf.sock
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1908304 ']'
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:18.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
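A few entries up, the cat / rpc_cmd pair (host_management.sh@23 and @30) replays a pre-built rpcs.txt through a single RPC session; the file's contents are never echoed to this log. Judging only from the visible effects, namely the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.2:4420, it is presumably along these lines; every value below is a hypothetical reconstruction, not log output:

    # hypothetical rpcs.txt -- inferred from the results above, not shown in this log
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Feeding such a newline-separated batch through one rpc.py session is what rpc_cmd achieves here, and it is much faster than invoking the RPC client once per command.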
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:18.523 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:18.523 {
00:14:18.523 "params": {
00:14:18.523 "name": "Nvme$subsystem",
00:14:18.523 "trtype": "$TEST_TRANSPORT",
00:14:18.523 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:18.523 "adrfam": "ipv4",
00:14:18.523 "trsvcid": "$NVMF_PORT",
00:14:18.523 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:18.523 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:18.523 "hdgst": ${hdgst:-false},
00:14:18.523 "ddgst": ${ddgst:-false}
00:14:18.523 },
00:14:18.523 "method": "bdev_nvme_attach_controller"
00:14:18.523 }
00:14:18.524 EOF
00:14:18.524 )")
00:14:18.524 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:14:18.524 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:14:18.524 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:14:18.524 11:40:46 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:18.524 "params": {
00:14:18.524 "name": "Nvme0",
00:14:18.524 "trtype": "tcp",
00:14:18.524 "traddr": "10.0.0.2",
00:14:18.524 "adrfam": "ipv4",
00:14:18.524 "trsvcid": "4420",
00:14:18.524 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:18.524 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:18.524 "hdgst": false,
00:14:18.524 "ddgst": false
00:14:18.524 },
00:14:18.524 "method": "bdev_nvme_attach_controller"
00:14:18.524 }'
00:14:18.524 [2024-07-15 11:40:46.487889] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:14:18.524 [2024-07-15 11:40:46.487939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908304 ]
00:14:18.524 EAL: No free 2048 kB hugepages reported on node 1
00:14:18.524 [2024-07-15 11:40:46.558999] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:18.524 [2024-07-15 11:40:46.627739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:18.784 Running I/O for 10 seconds...
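The launch just traced is the standard bdevperf-against-NVMf pattern: gen_nvmf_target_json renders the bdev_nvme_attach_controller config printed above, and bash process substitution hands it to bdevperf as an anonymous file descriptor, which is exactly why the trace shows --json /dev/fd/63. A sketch of the shape of that call (not the verbatim test script):

    # initiator side: attach to the target over NVMe/TCP and drive verify I/O;
    # <(...) is process substitution, hence the /dev/fd/63 seen in the trace
    # -q 64: queue depth, -o 65536: 64 KiB I/O size, -w verify: write-and-read-back
    # workload, -t 10: run length in seconds, -r: bdevperf's own RPC socket
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) \
        -q 64 -o 65536 -w verify -t 10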
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=837
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 837 -ge 100 ']'
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:19.357 [2024-07-15 11:40:47.367721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set
00:14:19.357 [2024-07-15 11:40:47.367767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set
00:14:19.357 [2024-07-15 11:40:47.367776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set
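The tcp.c:1607 recv-state errors that begin above (and continue below) are deliberate: host_management.sh@84 has just revoked this host's access with nvmf_subsystem_remove_host while bdevperf still has I/O in flight. The gate that guaranteed I/O was actually flowing first is waitforio, whose xtrace appears above; a reconstruction of that helper from the trace (the retry delay is an assumption, it is not visible in this log):

    # poll bdevperf's iostat until Nvme0n1 has completed >= 100 reads, at most 10 tries
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
        [ -z "$rpc_sock" ] && return 1
        [ -z "$bdev" ] && return 1
        for ((i = 10; i != 0; i--)); do
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25   # assumed retry delay; not visible in this trace
        done
        return $ret
    }

In this run the very first sample already showed read_io_count=837, so the loop broke out immediately and the disconnect proceeded under load.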
00:14:19.357 [2024-07-15 11:40:47.367786] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367794] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367803] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.367859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb76820 is same with the state(5) to be set 00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.357 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:19.357 [2024-07-15 11:40:47.375314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.357 [2024-07-15 11:40:47.375349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.375361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.357 [2024-07-15 11:40:47.375371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.375381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.357 [2024-07-15 11:40:47.375391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.375401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:19.357 [2024-07-15 11:40:47.375411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.375421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2112a70 is same with the state(5) to be set 00:14:19.357 [2024-07-15 11:40:47.376111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.357 [2024-07-15 11:40:47.376285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.357 [2024-07-15 11:40:47.376298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376353] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:117376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:117504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:117632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376569] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:118016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:118144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:118272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:118400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:118528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:118656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:118784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:118912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376785] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:119424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:119808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:119936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:120064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.376986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:19.358 [2024-07-15 11:40:47.376996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:19.358 [2024-07-15 11:40:47.377008] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:19.358 [2024-07-15 11:40:47.377017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.358 [... identical WRITE / ABORTED - SQ DELETION notice pairs repeat for cid:45-63 and cid:0-2, lba 120448 through 123136 in 128-block steps ...]
00:14:19.359 [2024-07-15 11:40:47.377497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:19.359 [2024-07-15 11:40:47.377507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:19.359 [2024-07-15 11:40:47.377572] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2523a30 was disconnected and freed. reset controller.
00:14:19.359 [2024-07-15 11:40:47.378430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:19.359 task offset: 115200 on job bdev=Nvme0n1 fails
00:14:19.359
00:14:19.359 Latency(us)
00:14:19.359 Device Information                                                        : runtime(s)    IOPS   MiB/s  Fail/s    TO/s  Average      min      max
00:14:19.359 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:19.359 Job: Nvme0n1 ended in about 0.59 seconds with error
00:14:19.359 Verification LBA range: start 0x0 length 0x400
00:14:19.359 Nvme0n1                                                                   :       0.59 1530.30   95.64  108.82    0.00 38322.50  1756.36 40265.32
00:14:19.359 ===================================================================================================================
00:14:19.359 Total                                                                     :            1530.30   95.64  108.82    0.00 38322.50  1756.36 40265.32
00:14:19.359 [2024-07-15 11:40:47.379946] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:19.359 [2024-07-15 11:40:47.379964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2112a70 (9): Bad file descriptor
00:14:19.359 11:40:47 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:19.359 11:40:47 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:14:19.359 [2024-07-15 11:40:47.524033] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
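The (00/08) status in the burst of notices above is the NVMe "Command Aborted due to SQ Deletion" code: every WRITE still outstanding on qpair 1 was failed back when the submission queue was torn down, after which bdev_nvme freed the qpair and kicked off the controller reset that completes at 11:40:47.524033. A rough triage sketch for confirming the burst is one contiguous run (console.log stands in for a saved copy of this output; the grep patterns come from the notices themselves):

    # count the in-flight commands failed by the SQ deletion
    grep -c 'ABORTED - SQ DELETION' console.log
    # pull the aborted WRITE LBAs and verify they step uniformly by 128 blocks
    grep -o 'lba:[0-9]*' console.log | cut -d: -f2 | sort -n \
        | awk 'NR > 1 { print $1 - prev } { prev = $1 }' | sort -u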
00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1908304 00:14:20.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1908304) - No such process 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:20.555 { 00:14:20.555 "params": { 00:14:20.555 "name": "Nvme$subsystem", 00:14:20.555 "trtype": "$TEST_TRANSPORT", 00:14:20.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:20.555 "adrfam": "ipv4", 00:14:20.555 "trsvcid": "$NVMF_PORT", 00:14:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:20.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:20.555 "hdgst": ${hdgst:-false}, 00:14:20.555 "ddgst": ${ddgst:-false} 00:14:20.555 }, 00:14:20.555 "method": "bdev_nvme_attach_controller" 00:14:20.555 } 00:14:20.555 EOF 00:14:20.555 )") 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:20.555 11:40:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:20.555 "params": { 00:14:20.555 "name": "Nvme0", 00:14:20.555 "trtype": "tcp", 00:14:20.555 "traddr": "10.0.0.2", 00:14:20.555 "adrfam": "ipv4", 00:14:20.555 "trsvcid": "4420", 00:14:20.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:20.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:20.555 "hdgst": false, 00:14:20.555 "ddgst": false 00:14:20.555 }, 00:14:20.555 "method": "bdev_nvme_attach_controller" 00:14:20.555 }' 00:14:20.555 [2024-07-15 11:40:48.440851] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:14:20.555 [2024-07-15 11:40:48.440899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908584 ] 00:14:20.555 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.555 [2024-07-15 11:40:48.510540] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.555 [2024-07-15 11:40:48.576652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.814 Running I/O for 1 seconds... 
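For reference, the configuration that gen_nvmf_target_json hands bdevperf over /dev/fd/62 is the printf'd object above wrapped in the envelope bdevperf's --json option expects. A standalone equivalent would look roughly like this (a sketch: /tmp/bdevperf.json is a hypothetical path, only the inner method/params object is verbatim from the trace, and the subsystems/bdev wrapper is assumed to match what the helper emits):

    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }

run with the same knobs as the command line above: build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1.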
00:14:21.751 00:14:21.751 Latency(us) 00:14:21.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.751 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:21.751 Verification LBA range: start 0x0 length 0x400 00:14:21.751 Nvme0n1 : 1.02 1511.36 94.46 0.00 0.00 41791.42 8965.32 32086.43 00:14:21.751 =================================================================================================================== 00:14:21.751 Total : 1511.36 94.46 0.00 0.00 41791.42 8965.32 32086.43 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:22.010 11:40:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:22.010 rmmod nvme_tcp 00:14:22.010 rmmod nvme_fabrics 00:14:22.010 rmmod nvme_keyring 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1908007 ']' 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1908007 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1908007 ']' 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1908007 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:22.010 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1908007 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1908007' 00:14:22.269 killing process with pid 1908007 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1908007 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1908007 00:14:22.269 [2024-07-15 11:40:50.300431] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.269 11:40:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.804 11:40:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:24.804 11:40:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:24.804 00:14:24.804 real 0m14.063s 00:14:24.804 user 0m23.097s 00:14:24.804 sys 0m6.634s 00:14:24.804 11:40:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:24.804 11:40:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:24.804 ************************************ 00:14:24.804 END TEST nvmf_host_management 00:14:24.804 ************************************ 00:14:24.804 11:40:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:24.804 11:40:52 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:24.804 11:40:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:24.804 11:40:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.804 11:40:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:24.804 ************************************ 00:14:24.804 START TEST nvmf_lvol 00:14:24.804 ************************************ 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:24.804 * Looking for test storage... 
00:14:24.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.804 11:40:52 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.805 11:40:52 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:24.805 11:40:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:31.395 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.395 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:31.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:31.396 Found net devices under 0000:af:00.0: cvl_0_0 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:31.396 Found net devices under 0000:af:00.1: cvl_0_1 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.396 
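The discovery pass running here reduces to a sysfs walk: each supported PCI function is matched against the e810/x722/mlx ID tables and then mapped to its kernel netdev by globbing its net/ directory. Condensed (the bus addresses are the ones found in the trace):

    for pci in 0000:af:00.0 0000:af:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"    # resolves to cvl_0_0 and cvl_0_1
    done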
11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:14:31.396 00:14:31.396 --- 10.0.0.2 ping statistics --- 00:14:31.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.396 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.396 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:31.396 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:14:31.396 00:14:31.396 --- 10.0.0.1 ping statistics --- 00:14:31.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.396 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1912542 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1912542 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1912542 ']' 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:31.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.396 11:40:59 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:31.655 [2024-07-15 11:40:59.513144] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:14:31.655 [2024-07-15 11:40:59.513196] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.655 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.655 [2024-07-15 11:40:59.589097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:31.655 [2024-07-15 11:40:59.663048] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:31.655 [2024-07-15 11:40:59.663086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
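Condensed, the nvmf_tcp_init bring-up a few lines back gives the target a private namespace with one e810 port while the initiator keeps the other in the root namespace; these are the same commands as in the trace, in order:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # the two pings above verify both directions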
00:14:31.655 [2024-07-15 11:40:59.663096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:31.655 [2024-07-15 11:40:59.663104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:31.655 [2024-07-15 11:40:59.663111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:31.655 [2024-07-15 11:40:59.663157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.655 [2024-07-15 11:40:59.663178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:31.655 [2024-07-15 11:40:59.663180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.224 11:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.224 11:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:32.224 11:41:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.224 11:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.224 11:41:00 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:32.483 11:41:00 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.483 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:32.483 [2024-07-15 11:41:00.523722] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.483 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:32.742 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:32.742 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.008 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:33.008 11:41:00 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:33.008 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:33.272 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4ecbbda5-b522-400a-9695-7562c9f95b6b 00:14:33.272 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4ecbbda5-b522-400a-9695-7562c9f95b6b lvol 20 00:14:33.530 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=8d007244-2125-4a29-8a7f-86e54d2ab6b7 00:14:33.530 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:33.789 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d007244-2125-4a29-8a7f-86e54d2ab6b7 00:14:33.789 11:41:01 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:34.048 [2024-07-15 11:41:01.985440] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:34.048 11:41:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:34.307 11:41:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:34.307 11:41:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1913094 00:14:34.307 11:41:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:34.307 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.244 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 8d007244-2125-4a29-8a7f-86e54d2ab6b7 MY_SNAPSHOT 00:14:35.503 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=6a51b0b9-82d4-4142-9263-97d247405760 00:14:35.503 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 8d007244-2125-4a29-8a7f-86e54d2ab6b7 30 00:14:35.762 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 6a51b0b9-82d4-4142-9263-97d247405760 MY_CLONE 00:14:35.762 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ef690756-1456-494a-b679-708ceb476b76 00:14:35.762 11:41:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ef690756-1456-494a-b679-708ceb476b76 00:14:36.330 11:41:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1913094 00:14:46.328 Initializing NVMe Controllers 00:14:46.328 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:46.328 Controller IO queue size 128, less than required. 00:14:46.328 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:46.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:46.328 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:46.328 Initialization complete. Launching workers. 
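Stripped of the harness wrappers, the RPC sequence this lvol test drives is the following (a condensed sketch; $rpc stands for scripts/rpc.py against the target started above, and the UUID variables capture what each call printed in the trace):

    rpc="scripts/rpc.py"
    $rpc bdev_malloc_create 64 512                          # -> Malloc0
    $rpc bdev_malloc_create 64 512                          # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)          # lvstore on the raid bdev
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)         # 20 MiB volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # while spdk_nvme_perf hammers the exported namespace with random writes:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # snapshot under live I/O
    $rpc bdev_lvol_resize "$lvol" 30                        # grow the live volume to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)          # thin clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                         # decouple the clone from the snapshot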
00:14:46.328 ======================================================== 00:14:46.328 Latency(us) 00:14:46.328 Device Information : IOPS MiB/s Average min max 00:14:46.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12822.50 50.09 9983.98 1664.35 58331.89 00:14:46.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12725.70 49.71 10061.30 3661.57 51420.20 00:14:46.328 ======================================================== 00:14:46.328 Total : 25548.20 99.80 10022.49 1664.35 58331.89 00:14:46.328 00:14:46.328 11:41:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.328 11:41:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d007244-2125-4a29-8a7f-86e54d2ab6b7 00:14:46.328 11:41:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4ecbbda5-b522-400a-9695-7562c9f95b6b 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:46.328 rmmod nvme_tcp 00:14:46.328 rmmod nvme_fabrics 00:14:46.328 rmmod nvme_keyring 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1912542 ']' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1912542 ']' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1912542' 00:14:46.328 killing process with pid 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1912542 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.328 
11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.328 11:41:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:47.761 00:14:47.761 real 0m23.092s 00:14:47.761 user 1m2.203s 00:14:47.761 sys 0m10.167s 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.761 ************************************ 00:14:47.761 END TEST nvmf_lvol 00:14:47.761 ************************************ 00:14:47.761 11:41:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:47.761 11:41:15 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:47.761 11:41:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:47.761 11:41:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.761 11:41:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:47.761 ************************************ 00:14:47.761 START TEST nvmf_lvs_grow 00:14:47.761 ************************************ 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:47.761 * Looking for test storage... 
00:14:47.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.761 11:41:15 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:14:54.327 Found 0000:af:00.0 (0x8086 - 0x159b) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:14:54.327 Found 0000:af:00.1 (0x8086 - 0x159b) 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:54.327 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:14:54.328 Found net devices under 0000:af:00.0: cvl_0_0 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:14:54.328 Found net devices under 0000:af:00.1: cvl_0_1 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:54.328 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:54.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:54.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:54.588 00:14:54.588 --- 10.0.0.2 ping statistics --- 00:14:54.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.588 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:54.588 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:54.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:54.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:14:54.847 00:14:54.847 --- 10.0.0.1 ping statistics --- 00:14:54.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:54.847 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1918636 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1918636 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1918636 ']' 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.848 11:41:22 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:54.848 [2024-07-15 11:41:22.795704] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:14:54.848 [2024-07-15 11:41:22.795748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.848 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.848 [2024-07-15 11:41:22.869904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.848 [2024-07-15 11:41:22.945801] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.848 [2024-07-15 11:41:22.945845] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:54.848 [2024-07-15 11:41:22.945854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.848 [2024-07-15 11:41:22.945863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.848 [2024-07-15 11:41:22.945870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.848 [2024-07-15 11:41:22.945892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:55.785 [2024-07-15 11:41:23.781320] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:55.785 ************************************ 00:14:55.785 START TEST lvs_grow_clean 00:14:55.785 ************************************ 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:55.785 11:41:23 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:56.044 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:14:56.044 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5a7257d-0284-4850-939e-4a690cb57b1b 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:56.303 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5a7257d-0284-4850-939e-4a690cb57b1b lvol 150 00:14:56.562 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6342d86-196b-49ea-87b2-78a5dc4466ae 00:14:56.562 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:56.562 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:56.843 [2024-07-15 11:41:24.715543] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:56.843 [2024-07-15 11:41:24.715595] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:56.843 true 00:14:56.843 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:14:56.843 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:56.843 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:56.843 11:41:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:57.102 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6342d86-196b-49ea-87b2-78a5dc4466ae 00:14:57.361 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:57.361 [2024-07-15 11:41:25.385564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.361 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1919195 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1919195 /var/tmp/bdevperf.sock 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1919195 ']' 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:57.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.621 11:41:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:57.621 [2024-07-15 11:41:25.594357] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:14:57.621 [2024-07-15 11:41:25.594406] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919195 ] 00:14:57.621 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.621 [2024-07-15 11:41:25.661633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.881 [2024-07-15 11:41:25.730742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.450 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.450 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:14:58.450 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:58.709 Nvme0n1 00:14:58.709 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:58.709 [ 00:14:58.709 { 00:14:58.709 "name": "Nvme0n1", 00:14:58.709 "aliases": [ 00:14:58.709 "a6342d86-196b-49ea-87b2-78a5dc4466ae" 00:14:58.709 ], 00:14:58.709 "product_name": "NVMe disk", 00:14:58.709 "block_size": 4096, 00:14:58.709 "num_blocks": 38912, 00:14:58.709 "uuid": "a6342d86-196b-49ea-87b2-78a5dc4466ae", 00:14:58.709 "assigned_rate_limits": { 00:14:58.709 "rw_ios_per_sec": 0, 00:14:58.709 "rw_mbytes_per_sec": 0, 00:14:58.709 "r_mbytes_per_sec": 0, 00:14:58.709 "w_mbytes_per_sec": 0 00:14:58.709 }, 00:14:58.709 "claimed": false, 00:14:58.709 "zoned": false, 00:14:58.709 "supported_io_types": { 00:14:58.709 "read": true, 00:14:58.709 "write": true, 00:14:58.709 "unmap": true, 00:14:58.709 "flush": true, 00:14:58.709 "reset": true, 00:14:58.709 "nvme_admin": true, 00:14:58.709 "nvme_io": true, 00:14:58.709 "nvme_io_md": false, 00:14:58.709 "write_zeroes": true, 00:14:58.709 "zcopy": false, 00:14:58.709 "get_zone_info": false, 00:14:58.709 "zone_management": false, 00:14:58.709 "zone_append": false, 00:14:58.709 "compare": true, 00:14:58.709 "compare_and_write": true, 00:14:58.709 "abort": true, 00:14:58.709 "seek_hole": false, 00:14:58.709 "seek_data": false, 00:14:58.709 "copy": true, 00:14:58.709 "nvme_iov_md": false 00:14:58.709 }, 00:14:58.709 "memory_domains": [ 00:14:58.709 { 00:14:58.709 "dma_device_id": "system", 00:14:58.709 "dma_device_type": 1 00:14:58.709 } 00:14:58.709 ], 00:14:58.709 "driver_specific": { 00:14:58.709 "nvme": [ 00:14:58.709 { 00:14:58.709 "trid": { 00:14:58.709 "trtype": "TCP", 00:14:58.709 "adrfam": "IPv4", 00:14:58.709 "traddr": "10.0.0.2", 00:14:58.709 "trsvcid": "4420", 00:14:58.709 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:58.709 }, 00:14:58.709 "ctrlr_data": { 00:14:58.709 "cntlid": 1, 00:14:58.709 "vendor_id": "0x8086", 00:14:58.709 "model_number": "SPDK bdev Controller", 00:14:58.709 "serial_number": "SPDK0", 00:14:58.709 "firmware_revision": "24.09", 00:14:58.709 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:58.709 "oacs": { 00:14:58.709 "security": 0, 00:14:58.709 "format": 0, 00:14:58.709 "firmware": 0, 00:14:58.709 "ns_manage": 0 00:14:58.709 }, 00:14:58.709 "multi_ctrlr": true, 00:14:58.709 "ana_reporting": false 00:14:58.709 }, 
00:14:58.709 "vs": { 00:14:58.709 "nvme_version": "1.3" 00:14:58.709 }, 00:14:58.709 "ns_data": { 00:14:58.709 "id": 1, 00:14:58.709 "can_share": true 00:14:58.709 } 00:14:58.709 } 00:14:58.709 ], 00:14:58.709 "mp_policy": "active_passive" 00:14:58.709 } 00:14:58.709 } 00:14:58.709 ] 00:14:58.709 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1919464 00:14:58.709 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:58.709 11:41:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:58.969 Running I/O for 10 seconds... 00:14:59.910 Latency(us) 00:14:59.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.910 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:59.910 Nvme0n1 : 1.00 23900.00 93.36 0.00 0.00 0.00 0.00 0.00 00:14:59.910 =================================================================================================================== 00:14:59.910 Total : 23900.00 93.36 0.00 0.00 0.00 0.00 0.00 00:14:59.910 00:15:00.847 11:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:00.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:00.847 Nvme0n1 : 2.00 24056.00 93.97 0.00 0.00 0.00 0.00 0.00 00:15:00.847 =================================================================================================================== 00:15:00.847 Total : 24056.00 93.97 0.00 0.00 0.00 0.00 0.00 00:15:00.847 00:15:01.107 true 00:15:01.107 11:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:01.107 11:41:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:01.107 11:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:01.107 11:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:01.107 11:41:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1919464 00:15:02.044 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:02.044 Nvme0n1 : 3.00 24075.67 94.05 0.00 0.00 0.00 0.00 0.00 00:15:02.044 =================================================================================================================== 00:15:02.044 Total : 24075.67 94.05 0.00 0.00 0.00 0.00 0.00 00:15:02.044 00:15:03.015 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.015 Nvme0n1 : 4.00 24011.75 93.80 0.00 0.00 0.00 0.00 0.00 00:15:03.015 =================================================================================================================== 00:15:03.015 Total : 24011.75 93.80 0.00 0.00 0.00 0.00 0.00 00:15:03.015 00:15:03.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:03.951 Nvme0n1 : 5.00 24086.40 94.09 0.00 0.00 0.00 0.00 0.00 00:15:03.951 =================================================================================================================== 00:15:03.951 
Total : 24086.40 94.09 0.00 0.00 0.00 0.00 0.00 00:15:03.951 00:15:04.886 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:04.886 Nvme0n1 : 6.00 24144.17 94.31 0.00 0.00 0.00 0.00 0.00 00:15:04.886 =================================================================================================================== 00:15:04.886 Total : 24144.17 94.31 0.00 0.00 0.00 0.00 0.00 00:15:04.886 00:15:05.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:05.820 Nvme0n1 : 7.00 24180.29 94.45 0.00 0.00 0.00 0.00 0.00 00:15:05.820 =================================================================================================================== 00:15:05.820 Total : 24180.29 94.45 0.00 0.00 0.00 0.00 0.00 00:15:05.820 00:15:07.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:07.199 Nvme0n1 : 8.00 24215.75 94.59 0.00 0.00 0.00 0.00 0.00 00:15:07.199 =================================================================================================================== 00:15:07.199 Total : 24215.75 94.59 0.00 0.00 0.00 0.00 0.00 00:15:07.199 00:15:08.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:08.136 Nvme0n1 : 9.00 24239.56 94.69 0.00 0.00 0.00 0.00 0.00 00:15:08.136 =================================================================================================================== 00:15:08.136 Total : 24239.56 94.69 0.00 0.00 0.00 0.00 0.00 00:15:08.136 00:15:09.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.111 Nvme0n1 : 10.00 24262.50 94.78 0.00 0.00 0.00 0.00 0.00 00:15:09.111 =================================================================================================================== 00:15:09.111 Total : 24262.50 94.78 0.00 0.00 0.00 0.00 0.00 00:15:09.111 00:15:09.111 00:15:09.111 Latency(us) 00:15:09.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.111 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:09.111 Nvme0n1 : 10.00 24263.81 94.78 0.00 0.00 5272.09 3316.12 11377.05 00:15:09.111 =================================================================================================================== 00:15:09.111 Total : 24263.81 94.78 0.00 0.00 5272.09 3316.12 11377.05 00:15:09.111 0 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1919195 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1919195 ']' 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1919195 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1919195 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:09.111 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1919195' 00:15:09.111 killing process with pid 1919195 00:15:09.112 11:41:36 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1919195 00:15:09.112 Received shutdown signal, test time was about 10.000000 seconds 00:15:09.112 00:15:09.112 Latency(us) 00:15:09.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:09.112 =================================================================================================================== 00:15:09.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:09.112 11:41:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1919195 00:15:09.112 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:09.370 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:09.628 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:09.628 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:09.628 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:09.628 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:09.628 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:09.887 [2024-07-15 11:41:37.848406] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:09.887 11:41:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:10.147 request: 00:15:10.147 { 00:15:10.147 "uuid": "a5a7257d-0284-4850-939e-4a690cb57b1b", 00:15:10.147 "method": "bdev_lvol_get_lvstores", 00:15:10.147 "req_id": 1 00:15:10.147 } 00:15:10.147 Got JSON-RPC error response 00:15:10.147 response: 00:15:10.147 { 00:15:10.147 "code": -19, 00:15:10.147 "message": "No such device" 00:15:10.147 } 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:10.147 aio_bdev 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6342d86-196b-49ea-87b2-78a5dc4466ae 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=a6342d86-196b-49ea-87b2-78a5dc4466ae 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:10.147 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:10.405 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6342d86-196b-49ea-87b2-78a5dc4466ae -t 2000 00:15:10.663 [ 00:15:10.663 { 00:15:10.663 "name": "a6342d86-196b-49ea-87b2-78a5dc4466ae", 00:15:10.663 "aliases": [ 00:15:10.663 "lvs/lvol" 00:15:10.663 ], 00:15:10.663 "product_name": "Logical Volume", 00:15:10.663 "block_size": 4096, 00:15:10.663 "num_blocks": 38912, 00:15:10.663 "uuid": "a6342d86-196b-49ea-87b2-78a5dc4466ae", 00:15:10.663 "assigned_rate_limits": { 00:15:10.663 "rw_ios_per_sec": 0, 00:15:10.663 "rw_mbytes_per_sec": 0, 00:15:10.663 "r_mbytes_per_sec": 0, 00:15:10.663 "w_mbytes_per_sec": 0 00:15:10.663 }, 00:15:10.663 "claimed": false, 00:15:10.663 "zoned": false, 00:15:10.663 "supported_io_types": { 00:15:10.663 "read": true, 00:15:10.663 "write": true, 00:15:10.663 "unmap": true, 00:15:10.663 "flush": false, 00:15:10.663 "reset": true, 00:15:10.663 "nvme_admin": false, 00:15:10.663 "nvme_io": false, 00:15:10.663 
"nvme_io_md": false, 00:15:10.663 "write_zeroes": true, 00:15:10.663 "zcopy": false, 00:15:10.663 "get_zone_info": false, 00:15:10.663 "zone_management": false, 00:15:10.663 "zone_append": false, 00:15:10.663 "compare": false, 00:15:10.663 "compare_and_write": false, 00:15:10.663 "abort": false, 00:15:10.663 "seek_hole": true, 00:15:10.663 "seek_data": true, 00:15:10.663 "copy": false, 00:15:10.663 "nvme_iov_md": false 00:15:10.663 }, 00:15:10.663 "driver_specific": { 00:15:10.663 "lvol": { 00:15:10.663 "lvol_store_uuid": "a5a7257d-0284-4850-939e-4a690cb57b1b", 00:15:10.663 "base_bdev": "aio_bdev", 00:15:10.663 "thin_provision": false, 00:15:10.663 "num_allocated_clusters": 38, 00:15:10.663 "snapshot": false, 00:15:10.663 "clone": false, 00:15:10.663 "esnap_clone": false 00:15:10.663 } 00:15:10.663 } 00:15:10.663 } 00:15:10.663 ] 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:10.663 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:10.922 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:10.922 11:41:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6342d86-196b-49ea-87b2-78a5dc4466ae 00:15:10.922 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5a7257d-0284-4850-939e-4a690cb57b1b 00:15:11.179 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:11.437 00:15:11.437 real 0m15.542s 00:15:11.437 user 0m14.617s 00:15:11.437 sys 0m1.975s 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:11.437 ************************************ 00:15:11.437 END TEST lvs_grow_clean 00:15:11.437 ************************************ 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.437 ************************************ 00:15:11.437 START TEST lvs_grow_dirty 00:15:11.437 ************************************ 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:11.437 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:11.696 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:11.696 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:11.955 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:11.955 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:11.955 11:41:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:11.955 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:11.955 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:11.955 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 lvol 150 00:15:12.213 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:12.213 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.213 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:12.472 
[2024-07-15 11:41:40.351594] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:12.472 [2024-07-15 11:41:40.351651] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:12.472 true 00:15:12.472 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:12.472 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:12.472 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:12.472 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:12.730 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:12.989 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:12.989 [2024-07-15 11:41:40.981492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.989 11:41:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1921899 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1921899 /var/tmp/bdevperf.sock 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1921899 ']' 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.248 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:13.248 [2024-07-15 11:41:41.193714] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:13.248 [2024-07-15 11:41:41.193770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1921899 ] 00:15:13.248 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.248 [2024-07-15 11:41:41.260515] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.248 [2024-07-15 11:41:41.329146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.184 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.184 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:14.184 11:41:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:14.184 Nvme0n1 00:15:14.184 11:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:14.443 [ 00:15:14.443 { 00:15:14.443 "name": "Nvme0n1", 00:15:14.443 "aliases": [ 00:15:14.443 "d73f8af1-87b4-41f6-9703-cb4ca839eb20" 00:15:14.443 ], 00:15:14.443 "product_name": "NVMe disk", 00:15:14.443 "block_size": 4096, 00:15:14.443 "num_blocks": 38912, 00:15:14.443 "uuid": "d73f8af1-87b4-41f6-9703-cb4ca839eb20", 00:15:14.443 "assigned_rate_limits": { 00:15:14.443 "rw_ios_per_sec": 0, 00:15:14.443 "rw_mbytes_per_sec": 0, 00:15:14.443 "r_mbytes_per_sec": 0, 00:15:14.443 "w_mbytes_per_sec": 0 00:15:14.443 }, 00:15:14.443 "claimed": false, 00:15:14.443 "zoned": false, 00:15:14.443 "supported_io_types": { 00:15:14.443 "read": true, 00:15:14.443 "write": true, 00:15:14.443 "unmap": true, 00:15:14.443 "flush": true, 00:15:14.443 "reset": true, 00:15:14.443 "nvme_admin": true, 00:15:14.443 "nvme_io": true, 00:15:14.443 "nvme_io_md": false, 00:15:14.443 "write_zeroes": true, 00:15:14.443 "zcopy": false, 00:15:14.443 "get_zone_info": false, 00:15:14.443 "zone_management": false, 00:15:14.443 "zone_append": false, 00:15:14.443 "compare": true, 00:15:14.443 "compare_and_write": true, 00:15:14.443 "abort": true, 00:15:14.443 "seek_hole": false, 00:15:14.443 "seek_data": false, 00:15:14.443 "copy": true, 00:15:14.443 "nvme_iov_md": false 00:15:14.443 }, 00:15:14.443 "memory_domains": [ 00:15:14.443 { 00:15:14.443 "dma_device_id": "system", 00:15:14.443 "dma_device_type": 1 00:15:14.443 } 00:15:14.443 ], 00:15:14.443 "driver_specific": { 00:15:14.443 "nvme": [ 00:15:14.443 { 00:15:14.443 "trid": { 00:15:14.443 "trtype": "TCP", 00:15:14.443 "adrfam": "IPv4", 00:15:14.443 "traddr": "10.0.0.2", 00:15:14.443 "trsvcid": "4420", 00:15:14.443 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:14.443 }, 00:15:14.443 "ctrlr_data": { 00:15:14.443 "cntlid": 1, 00:15:14.443 "vendor_id": "0x8086", 00:15:14.443 "model_number": "SPDK bdev Controller", 00:15:14.443 "serial_number": "SPDK0", 
00:15:14.443 "firmware_revision": "24.09", 00:15:14.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:14.443 "oacs": { 00:15:14.443 "security": 0, 00:15:14.443 "format": 0, 00:15:14.443 "firmware": 0, 00:15:14.443 "ns_manage": 0 00:15:14.443 }, 00:15:14.443 "multi_ctrlr": true, 00:15:14.443 "ana_reporting": false 00:15:14.443 }, 00:15:14.443 "vs": { 00:15:14.443 "nvme_version": "1.3" 00:15:14.443 }, 00:15:14.443 "ns_data": { 00:15:14.443 "id": 1, 00:15:14.443 "can_share": true 00:15:14.443 } 00:15:14.443 } 00:15:14.443 ], 00:15:14.443 "mp_policy": "active_passive" 00:15:14.443 } 00:15:14.443 } 00:15:14.443 ] 00:15:14.443 11:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1922162 00:15:14.443 11:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:14.443 11:41:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:14.443 Running I/O for 10 seconds... 00:15:15.379 Latency(us) 00:15:15.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.379 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:15.379 Nvme0n1 : 1.00 24042.00 93.91 0.00 0.00 0.00 0.00 0.00 00:15:15.379 =================================================================================================================== 00:15:15.379 Total : 24042.00 93.91 0.00 0.00 0.00 0.00 0.00 00:15:15.379 00:15:16.314 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:16.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.573 Nvme0n1 : 2.00 24240.00 94.69 0.00 0.00 0.00 0.00 0.00 00:15:16.573 =================================================================================================================== 00:15:16.573 Total : 24240.00 94.69 0.00 0.00 0.00 0.00 0.00 00:15:16.573 00:15:16.573 true 00:15:16.573 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:16.573 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:16.832 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:16.832 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:16.832 11:41:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1922162 00:15:17.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.400 Nvme0n1 : 3.00 24299.00 94.92 0.00 0.00 0.00 0.00 0.00 00:15:17.400 =================================================================================================================== 00:15:17.400 Total : 24299.00 94.92 0.00 0.00 0.00 0.00 0.00 00:15:17.400 00:15:18.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.778 Nvme0n1 : 4.00 24355.50 95.14 0.00 0.00 0.00 0.00 0.00 00:15:18.778 =================================================================================================================== 00:15:18.778 Total : 24355.50 95.14 0.00 
0.00 0.00 0.00 0.00 00:15:18.778 00:15:19.715 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.715 Nvme0n1 : 5.00 24399.40 95.31 0.00 0.00 0.00 0.00 0.00 00:15:19.715 =================================================================================================================== 00:15:19.715 Total : 24399.40 95.31 0.00 0.00 0.00 0.00 0.00 00:15:19.715 00:15:20.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.660 Nvme0n1 : 6.00 24447.83 95.50 0.00 0.00 0.00 0.00 0.00 00:15:20.660 =================================================================================================================== 00:15:20.660 Total : 24447.83 95.50 0.00 0.00 0.00 0.00 0.00 00:15:20.660 00:15:21.597 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.597 Nvme0n1 : 7.00 24484.43 95.64 0.00 0.00 0.00 0.00 0.00 00:15:21.597 =================================================================================================================== 00:15:21.597 Total : 24484.43 95.64 0.00 0.00 0.00 0.00 0.00 00:15:21.597 00:15:22.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.532 Nvme0n1 : 8.00 24463.88 95.56 0.00 0.00 0.00 0.00 0.00 00:15:22.532 =================================================================================================================== 00:15:22.532 Total : 24463.88 95.56 0.00 0.00 0.00 0.00 0.00 00:15:22.532 00:15:23.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.466 Nvme0n1 : 9.00 24470.67 95.59 0.00 0.00 0.00 0.00 0.00 00:15:23.467 =================================================================================================================== 00:15:23.467 Total : 24470.67 95.59 0.00 0.00 0.00 0.00 0.00 00:15:23.467 00:15:24.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.405 Nvme0n1 : 10.00 24494.00 95.68 0.00 0.00 0.00 0.00 0.00 00:15:24.405 =================================================================================================================== 00:15:24.405 Total : 24494.00 95.68 0.00 0.00 0.00 0.00 0.00 00:15:24.405 00:15:24.405 00:15:24.405 Latency(us) 00:15:24.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.405 Nvme0n1 : 10.00 24491.50 95.67 0.00 0.00 5223.01 3198.16 16462.64 00:15:24.405 =================================================================================================================== 00:15:24.405 Total : 24491.50 95.67 0.00 0.00 5223.01 3198.16 16462.64 00:15:24.405 0 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1921899 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1921899 ']' 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1921899 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:24.405 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1921899 00:15:24.664 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:24.664 11:41:52 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:24.664 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1921899' 00:15:24.664 killing process with pid 1921899 00:15:24.664 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1921899 00:15:24.664 Received shutdown signal, test time was about 10.000000 seconds 00:15:24.664 00:15:24.664 Latency(us) 00:15:24.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:24.664 =================================================================================================================== 00:15:24.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:24.664 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1921899 00:15:24.664 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:24.924 11:41:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1918636 00:15:25.183 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1918636 00:15:25.442 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1918636 Killed "${NVMF_APP[@]}" "$@" 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1924013 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1924013 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1924013 ']' 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:15:25.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:25.442 11:41:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:25.442 [2024-07-15 11:41:53.357732] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:25.442 [2024-07-15 11:41:53.357788] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.442 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.442 [2024-07-15 11:41:53.432963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.442 [2024-07-15 11:41:53.504389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.442 [2024-07-15 11:41:53.504427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.442 [2024-07-15 11:41:53.504436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.442 [2024-07-15 11:41:53.504444] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.442 [2024-07-15 11:41:53.504450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
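The app_setup_trace notices above spell out how this target's tracepoints can be inspected while it runs. A minimal sketch of both options, taken from the notices themselves; the build/bin location of spdk_trace is an assumption about this checkout:

# snapshot the nvmf tracepoint group of the target started with shm id 0 (-i 0)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0
# or keep the raw shared-memory trace file for offline analysis, as the notice suggests
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0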
00:15:25.442 [2024-07-15 11:41:53.504470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.379 [2024-07-15 11:41:54.345631] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:26.379 [2024-07-15 11:41:54.345717] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:26.379 [2024-07-15 11:41:54.345744] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:26.379 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:26.638 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d73f8af1-87b4-41f6-9703-cb4ca839eb20 -t 2000 00:15:26.638 [ 00:15:26.639 { 00:15:26.639 "name": "d73f8af1-87b4-41f6-9703-cb4ca839eb20", 00:15:26.639 "aliases": [ 00:15:26.639 "lvs/lvol" 00:15:26.639 ], 00:15:26.639 "product_name": "Logical Volume", 00:15:26.639 "block_size": 4096, 00:15:26.639 "num_blocks": 38912, 00:15:26.639 "uuid": "d73f8af1-87b4-41f6-9703-cb4ca839eb20", 00:15:26.639 "assigned_rate_limits": { 00:15:26.639 "rw_ios_per_sec": 0, 00:15:26.639 "rw_mbytes_per_sec": 0, 00:15:26.639 "r_mbytes_per_sec": 0, 00:15:26.639 "w_mbytes_per_sec": 0 00:15:26.639 }, 00:15:26.639 "claimed": false, 00:15:26.639 "zoned": false, 00:15:26.639 "supported_io_types": { 00:15:26.639 "read": true, 00:15:26.639 "write": true, 00:15:26.639 "unmap": true, 00:15:26.639 "flush": false, 00:15:26.639 "reset": true, 00:15:26.639 "nvme_admin": false, 00:15:26.639 "nvme_io": false, 00:15:26.639 "nvme_io_md": 
false, 00:15:26.639 "write_zeroes": true, 00:15:26.639 "zcopy": false, 00:15:26.639 "get_zone_info": false, 00:15:26.639 "zone_management": false, 00:15:26.639 "zone_append": false, 00:15:26.639 "compare": false, 00:15:26.639 "compare_and_write": false, 00:15:26.639 "abort": false, 00:15:26.639 "seek_hole": true, 00:15:26.639 "seek_data": true, 00:15:26.639 "copy": false, 00:15:26.639 "nvme_iov_md": false 00:15:26.639 }, 00:15:26.639 "driver_specific": { 00:15:26.639 "lvol": { 00:15:26.639 "lvol_store_uuid": "f1e4d20d-a985-41dd-8af9-f9f65a0c2397", 00:15:26.639 "base_bdev": "aio_bdev", 00:15:26.639 "thin_provision": false, 00:15:26.639 "num_allocated_clusters": 38, 00:15:26.639 "snapshot": false, 00:15:26.639 "clone": false, 00:15:26.639 "esnap_clone": false 00:15:26.639 } 00:15:26.639 } 00:15:26.639 } 00:15:26.639 ] 00:15:26.639 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:26.639 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:26.639 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:26.928 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:26.928 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:26.928 11:41:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:27.188 [2024-07-15 11:41:55.205898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:27.188 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:27.447 request: 00:15:27.447 { 00:15:27.447 "uuid": "f1e4d20d-a985-41dd-8af9-f9f65a0c2397", 00:15:27.447 "method": "bdev_lvol_get_lvstores", 00:15:27.447 "req_id": 1 00:15:27.447 } 00:15:27.447 Got JSON-RPC error response 00:15:27.447 response: 00:15:27.447 { 00:15:27.447 "code": -19, 00:15:27.447 "message": "No such device" 00:15:27.447 } 00:15:27.447 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:27.447 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:27.447 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:27.447 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:27.448 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:27.707 aio_bdev 00:15:27.707 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:27.708 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d73f8af1-87b4-41f6-9703-cb4ca839eb20 -t 2000 00:15:27.967 [ 00:15:27.967 { 00:15:27.967 "name": "d73f8af1-87b4-41f6-9703-cb4ca839eb20", 00:15:27.967 "aliases": [ 00:15:27.967 "lvs/lvol" 00:15:27.967 ], 00:15:27.967 "product_name": "Logical Volume", 00:15:27.967 "block_size": 4096, 00:15:27.967 "num_blocks": 38912, 00:15:27.967 "uuid": "d73f8af1-87b4-41f6-9703-cb4ca839eb20", 00:15:27.967 "assigned_rate_limits": { 00:15:27.967 "rw_ios_per_sec": 0, 00:15:27.967 "rw_mbytes_per_sec": 0, 00:15:27.967 "r_mbytes_per_sec": 0, 00:15:27.967 "w_mbytes_per_sec": 0 00:15:27.967 }, 00:15:27.967 "claimed": false, 00:15:27.967 "zoned": false, 00:15:27.967 "supported_io_types": { 
00:15:27.967 "read": true, 00:15:27.967 "write": true, 00:15:27.967 "unmap": true, 00:15:27.967 "flush": false, 00:15:27.967 "reset": true, 00:15:27.967 "nvme_admin": false, 00:15:27.967 "nvme_io": false, 00:15:27.967 "nvme_io_md": false, 00:15:27.967 "write_zeroes": true, 00:15:27.967 "zcopy": false, 00:15:27.967 "get_zone_info": false, 00:15:27.967 "zone_management": false, 00:15:27.967 "zone_append": false, 00:15:27.967 "compare": false, 00:15:27.967 "compare_and_write": false, 00:15:27.967 "abort": false, 00:15:27.967 "seek_hole": true, 00:15:27.967 "seek_data": true, 00:15:27.967 "copy": false, 00:15:27.967 "nvme_iov_md": false 00:15:27.967 }, 00:15:27.967 "driver_specific": { 00:15:27.967 "lvol": { 00:15:27.967 "lvol_store_uuid": "f1e4d20d-a985-41dd-8af9-f9f65a0c2397", 00:15:27.967 "base_bdev": "aio_bdev", 00:15:27.967 "thin_provision": false, 00:15:27.967 "num_allocated_clusters": 38, 00:15:27.967 "snapshot": false, 00:15:27.967 "clone": false, 00:15:27.967 "esnap_clone": false 00:15:27.967 } 00:15:27.967 } 00:15:27.967 } 00:15:27.967 ] 00:15:27.967 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:27.967 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:27.967 11:41:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:28.227 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:28.227 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:28.227 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:28.227 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:28.227 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d73f8af1-87b4-41f6-9703-cb4ca839eb20 00:15:28.487 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f1e4d20d-a985-41dd-8af9-f9f65a0c2397 00:15:28.746 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:28.746 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:28.746 00:15:28.746 real 0m17.381s 00:15:28.746 user 0m43.548s 00:15:28.746 sys 0m4.715s 00:15:28.746 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.746 11:41:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:28.746 ************************************ 00:15:28.746 END TEST lvs_grow_dirty 00:15:28.746 ************************************ 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:29.005 nvmf_trace.0 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:29.005 11:41:56 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:29.005 rmmod nvme_tcp 00:15:29.005 rmmod nvme_fabrics 00:15:29.005 rmmod nvme_keyring 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1924013 ']' 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1924013 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1924013 ']' 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1924013 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924013 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924013' 00:15:29.005 killing process with pid 1924013 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1924013 00:15:29.005 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1924013 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:29.265 
11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:29.265 11:41:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.800 11:41:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:31.800 00:15:31.800 real 0m43.677s 00:15:31.800 user 1m4.397s 00:15:31.800 sys 0m12.444s 00:15:31.800 11:41:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:31.800 11:41:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:31.800 ************************************ 00:15:31.800 END TEST nvmf_lvs_grow 00:15:31.800 ************************************ 00:15:31.801 11:41:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:31.801 11:41:59 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:31.801 11:41:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:31.801 11:41:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:31.801 11:41:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.801 ************************************ 00:15:31.801 START TEST nvmf_bdev_io_wait 00:15:31.801 ************************************ 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:31.801 * Looking for test storage... 
00:15:31.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:31.801 11:41:59 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:38.364 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:38.365 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:38.365 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:38.365 Found net devices under 0000:af:00.0: cvl_0_0 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:38.365 Found net devices under 0000:af:00.1: cvl_0_1 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:38.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:38.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:15:38.365 00:15:38.365 --- 10.0.0.2 ping statistics --- 00:15:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.365 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:38.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:38.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:15:38.365 00:15:38.365 --- 10.0.0.1 ping statistics --- 00:15:38.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:38.365 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:38.365 11:42:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1928826 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1928826 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1928826 ']' 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:38.365 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.365 [2024-07-15 11:42:06.064911] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:15:38.365 [2024-07-15 11:42:06.064956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.365 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.365 [2024-07-15 11:42:06.138763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:38.365 [2024-07-15 11:42:06.214756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.365 [2024-07-15 11:42:06.214796] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.365 [2024-07-15 11:42:06.214805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:38.365 [2024-07-15 11:42:06.214814] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:38.365 [2024-07-15 11:42:06.214821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.365 [2024-07-15 11:42:06.214872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.365 [2024-07-15 11:42:06.214969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.365 [2024-07-15 11:42:06.215055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:38.365 [2024-07-15 11:42:06.215057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 [2024-07-15 11:42:06.992128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:38.933 11:42:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
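The rpc_cmd calls in this stretch stand up the NVMe/TCP target step by step. For reference, the same sequence issued directly with scripts/rpc.py would look like the sketch below (commands and arguments are verbatim from the xtrace lines; the harness's rpc_cmd wrapper, not these literal invocations, is what actually runs here):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                # tune the bdev layer before init
$rpc framework_start_init                      # finish startup of the --wait-for-rpc target
$rpc nvmf_create_transport -t tcp -o -u 8192   # "TCP Transport Init" notice above
$rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420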
00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:38.933 Malloc0 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.933 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:39.192 [2024-07-15 11:42:07.055419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1929111 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1929113 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.192 { 00:15:39.192 "params": { 00:15:39.192 "name": "Nvme$subsystem", 00:15:39.192 "trtype": "$TEST_TRANSPORT", 00:15:39.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.192 "adrfam": "ipv4", 00:15:39.192 "trsvcid": "$NVMF_PORT", 00:15:39.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.192 "hdgst": ${hdgst:-false}, 00:15:39.192 "ddgst": ${ddgst:-false} 00:15:39.192 }, 00:15:39.192 "method": "bdev_nvme_attach_controller" 00:15:39.192 } 00:15:39.192 EOF 00:15:39.192 )") 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1929115 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.192 { 00:15:39.192 "params": { 00:15:39.192 "name": "Nvme$subsystem", 00:15:39.192 "trtype": "$TEST_TRANSPORT", 00:15:39.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.192 "adrfam": "ipv4", 00:15:39.192 "trsvcid": "$NVMF_PORT", 00:15:39.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.192 "hdgst": ${hdgst:-false}, 00:15:39.192 "ddgst": ${ddgst:-false} 00:15:39.192 }, 00:15:39.192 "method": "bdev_nvme_attach_controller" 00:15:39.192 } 00:15:39.192 EOF 00:15:39.192 )") 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1929118 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.192 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.192 { 00:15:39.192 "params": { 00:15:39.192 "name": "Nvme$subsystem", 00:15:39.192 "trtype": "$TEST_TRANSPORT", 00:15:39.192 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.192 "adrfam": "ipv4", 00:15:39.192 "trsvcid": "$NVMF_PORT", 00:15:39.192 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.192 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.192 "hdgst": ${hdgst:-false}, 00:15:39.192 "ddgst": ${ddgst:-false} 00:15:39.192 }, 00:15:39.192 "method": "bdev_nvme_attach_controller" 00:15:39.192 } 00:15:39.192 EOF 00:15:39.192 )") 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:39.193 11:42:07 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:39.193 { 00:15:39.193 "params": { 00:15:39.193 "name": "Nvme$subsystem", 00:15:39.193 "trtype": "$TEST_TRANSPORT", 00:15:39.193 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:39.193 "adrfam": "ipv4", 00:15:39.193 "trsvcid": "$NVMF_PORT", 00:15:39.193 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:39.193 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:39.193 "hdgst": ${hdgst:-false}, 00:15:39.193 "ddgst": ${ddgst:-false} 00:15:39.193 }, 00:15:39.193 "method": "bdev_nvme_attach_controller" 00:15:39.193 } 00:15:39.193 EOF 00:15:39.193 )") 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1929111 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.193 "params": { 00:15:39.193 "name": "Nvme1", 00:15:39.193 "trtype": "tcp", 00:15:39.193 "traddr": "10.0.0.2", 00:15:39.193 "adrfam": "ipv4", 00:15:39.193 "trsvcid": "4420", 00:15:39.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.193 "hdgst": false, 00:15:39.193 "ddgst": false 00:15:39.193 }, 00:15:39.193 "method": "bdev_nvme_attach_controller" 00:15:39.193 }' 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
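Each bdevperf instance receives its bdev configuration as JSON on a file descriptor rather than over RPC. The printf above emits the attach-controller entry, shown flattened in the log; pretty-printed it reads as below. gen_nvmf_target_json wraps entries like this into the full --json config, and that wrapper document itself is not shown in this log:

# values verbatim from this run; emitting the entry as a heredoc for readability
cat << 'JSON'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON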
00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.193 "params": { 00:15:39.193 "name": "Nvme1", 00:15:39.193 "trtype": "tcp", 00:15:39.193 "traddr": "10.0.0.2", 00:15:39.193 "adrfam": "ipv4", 00:15:39.193 "trsvcid": "4420", 00:15:39.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.193 "hdgst": false, 00:15:39.193 "ddgst": false 00:15:39.193 }, 00:15:39.193 "method": "bdev_nvme_attach_controller" 00:15:39.193 }' 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.193 "params": { 00:15:39.193 "name": "Nvme1", 00:15:39.193 "trtype": "tcp", 00:15:39.193 "traddr": "10.0.0.2", 00:15:39.193 "adrfam": "ipv4", 00:15:39.193 "trsvcid": "4420", 00:15:39.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.193 "hdgst": false, 00:15:39.193 "ddgst": false 00:15:39.193 }, 00:15:39.193 "method": "bdev_nvme_attach_controller" 00:15:39.193 }' 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:39.193 11:42:07 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:39.193 "params": { 00:15:39.193 "name": "Nvme1", 00:15:39.193 "trtype": "tcp", 00:15:39.193 "traddr": "10.0.0.2", 00:15:39.193 "adrfam": "ipv4", 00:15:39.193 "trsvcid": "4420", 00:15:39.193 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:39.193 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:39.193 "hdgst": false, 00:15:39.193 "ddgst": false 00:15:39.193 }, 00:15:39.193 "method": "bdev_nvme_attach_controller" 00:15:39.193 }' 00:15:39.193 [2024-07-15 11:42:07.107829] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:39.193 [2024-07-15 11:42:07.107890] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:39.193 [2024-07-15 11:42:07.108306] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:39.193 [2024-07-15 11:42:07.108350] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:39.193 [2024-07-15 11:42:07.110144] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:15:39.193 [2024-07-15 11:42:07.110143] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:15:39.193 [2024-07-15 11:42:07.110195] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:39.193 [2024-07-15 11:42:07.110195] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:39.193 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.193 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.452 [2024-07-15 11:42:07.304096] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.452 [2024-07-15 11:42:07.377788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:39.452 [2024-07-15 11:42:07.420888] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.452 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.452 [2024-07-15 11:42:07.474146] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.452 [2024-07-15 11:42:07.510283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:39.452 [2024-07-15 11:42:07.539301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.452 [2024-07-15 11:42:07.550566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:39.711 [2024-07-15 11:42:07.613045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:39.711 Running I/O for 1 seconds... 00:15:39.711 Running I/O for 1 seconds... 00:15:39.711 Running I/O for 1 seconds... 00:15:39.711 Running I/O for 1 seconds...
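The four config blocks printed above come from a single template: each backgrounded bdevperf instance receives the same bdev_nvme_attach_controller parameters, a dedicated core mask, and a distinct shm id. A minimal bash sketch of that launch pattern follows; gen_config is a hypothetical stand-in for the template expansion done by nvmf/common.sh, and the queue depth, IO size, duration, and workloads are taken from the result tables below.

#!/usr/bin/env bash
# Hypothetical stand-in for the JSON template traced above: one
# bdev_nvme_attach_controller entry per instance.
gen_config() {
    local i=$1
    cat <<EOF
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme$i",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode$i",
        "hostnqn": "nqn.2016-06.io.spdk:host$i",
        "hdgst": false,
        "ddgst": false
      }
    }]
  }]
}
EOF
}

# One bdevperf per workload on its own core; -i N sets the shm id, which is
# what surfaces as --file-prefix=spdkN in the EAL parameter lines above.
workloads=(write read flush unmap)
pids=()
for i in 1 2 3 4; do
    mask=$(printf '0x%x' $((0x10 << (i - 1))))
    ./build/examples/bdevperf -m "$mask" -i "$i" --json <(gen_config "$i") \
        -q 128 -o 4096 -w "${workloads[i - 1]}" -t 1 &
    pids+=($!)
done
wait "${pids[@]}"   # the numeric "wait 1929111/..." traces do the same per PID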
00:15:40.649 00:15:40.649 Latency(us) 00:15:40.649 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.649 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:40.649 Nvme1n1 : 1.00 256491.00 1001.92 0.00 0.00 497.01 207.26 681.57 00:15:40.649 =================================================================================================================== 00:15:40.649 Total : 256491.00 1001.92 0.00 0.00 497.01 207.26 681.57 00:15:40.908 00:15:40.908 Latency(us) 00:15:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.908 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:40.908 Nvme1n1 : 1.01 13825.14 54.00 0.00 0.00 9230.29 5452.60 18979.23 00:15:40.908 =================================================================================================================== 00:15:40.908 Total : 13825.14 54.00 0.00 0.00 9230.29 5452.60 18979.23 00:15:40.908 00:15:40.908 Latency(us) 00:15:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.908 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:40.908 Nvme1n1 : 1.01 10249.61 40.04 0.00 0.00 12444.59 6107.96 23383.24 00:15:40.908 =================================================================================================================== 00:15:40.908 Total : 10249.61 40.04 0.00 0.00 12444.59 6107.96 23383.24 00:15:40.908 00:15:40.908 Latency(us) 00:15:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.908 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:40.908 Nvme1n1 : 1.01 10153.53 39.66 0.00 0.00 12568.59 6212.81 26424.12 00:15:40.908 =================================================================================================================== 00:15:40.908 Total : 10153.53 39.66 0.00 0.00 12568.59 6212.81 26424.12 00:15:41.166 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1929113 00:15:41.166 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1929115 00:15:41.166 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1929118 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:41.167 rmmod nvme_tcp 00:15:41.167 rmmod nvme_fabrics 00:15:41.167 rmmod nvme_keyring 00:15:41.167 11:42:09 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1928826 ']' 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1928826 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1928826 ']' 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1928826 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:41.167 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1928826 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1928826' 00:15:41.426 killing process with pid 1928826 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1928826 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1928826 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.426 11:42:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.960 11:42:11 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:43.960 00:15:43.960 real 0m12.141s 00:15:43.960 user 0m20.142s 00:15:43.960 sys 0m7.026s 00:15:43.960 11:42:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:43.960 11:42:11 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:43.960 ************************************ 00:15:43.960 END TEST nvmf_bdev_io_wait 00:15:43.960 ************************************ 00:15:43.960 11:42:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:43.960 11:42:11 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.960 11:42:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:43.960 11:42:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:43.960 11:42:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:43.960 ************************************ 00:15:43.960 START TEST nvmf_queue_depth 00:15:43.960 
************************************ 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:43.960 * Looking for test storage... 00:15:43.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:43.960 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:15:43.961 11:42:11 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:50.522 
11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:15:50.522 Found 0000:af:00.0 (0x8086 - 0x159b) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:15:50.522 Found 0000:af:00.1 (0x8086 - 0x159b) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:15:50.522 Found net devices under 0000:af:00.0: cvl_0_0 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:50.522 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:15:50.523 Found net devices under 0000:af:00.1: cvl_0_1 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:50.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:50.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:15:50.523 00:15:50.523 --- 10.0.0.2 ping statistics --- 00:15:50.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.523 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:50.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:50.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:15:50.523 00:15:50.523 --- 10.0.0.1 ping statistics --- 00:15:50.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:50.523 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1933088 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1933088 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1933088 ']' 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:50.523 11:42:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:50.523 [2024-07-15 11:42:18.469048] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
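For reference, the nvmf_tcp_init sequence traced above reduces to a dozen iproute2/iptables commands; the interface names and addresses are exactly the ones discovered in this run.

# Move the target port into its own namespace so the kernel initiator on
# cvl_0_1 reaches the SPDK target on cvl_0_0 over the physical link.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Admit NVMe/TCP traffic on the default port, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side process is then wrapped in the namespace, which is why nvmf_tgt is launched above as "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2".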
00:15:50.523 [2024-07-15 11:42:18.469103] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:50.523 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.523 [2024-07-15 11:42:18.543290] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.523 [2024-07-15 11:42:18.615732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:50.523 [2024-07-15 11:42:18.615769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:50.523 [2024-07-15 11:42:18.615778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:50.523 [2024-07-15 11:42:18.615787] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:50.523 [2024-07-15 11:42:18.615794] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:50.523 [2024-07-15 11:42:18.615819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 [2024-07-15 11:42:19.310437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 Malloc0 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.458 
11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 [2024-07-15 11:42:19.365896] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1933251 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1933251 /var/tmp/bdevperf.sock 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1933251 ']' 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:51.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:51.458 11:42:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:51.458 [2024-07-15 11:42:19.416186] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
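The rpc_cmd calls above, together with the controller attach and perform_tests steps that follow, condense to the sequence below; paths are relative to the SPDK tree, and the socket names are the defaults used in this run.

rpc=scripts/rpc.py   # default socket /var/tmp/spdk.sock, i.e. the nvmf target

# Target side: TCP transport, a 64 MiB Malloc bdev with 512-byte blocks,
# then one subsystem with that namespace and a listener on 10.0.0.2:4420.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: -z parks bdevperf until it is driven over its own RPC
# socket (the test waits for the socket before issuing any RPC).
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

With -q 1024 against a single Malloc-backed namespace, the run exercises exactly the queue-depth handling this test is named for.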
00:15:51.458 [2024-07-15 11:42:19.416234] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1933251 ] 00:15:51.458 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.458 [2024-07-15 11:42:19.484550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.458 [2024-07-15 11:42:19.554259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:52.396 NVMe0n1 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.396 11:42:20 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.654 Running I/O for 10 seconds... 00:16:02.696 00:16:02.696 Latency(us) 00:16:02.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.696 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:02.696 Verification LBA range: start 0x0 length 0x4000 00:16:02.696 NVMe0n1 : 10.06 13098.09 51.16 0.00 0.00 77900.52 19084.08 51170.51 00:16:02.696 =================================================================================================================== 00:16:02.696 Total : 13098.09 51.16 0.00 0.00 77900.52 19084.08 51170.51 00:16:02.696 0 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1933251 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1933251 ']' 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1933251 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1933251 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1933251' 00:16:02.696 killing process with pid 1933251 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1933251 00:16:02.696 Received shutdown signal, test time was about 10.000000 seconds 00:16:02.696 00:16:02.696 Latency(us) 00:16:02.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.696 
=================================================================================================================== 00:16:02.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:02.696 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1933251 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.955 rmmod nvme_tcp 00:16:02.955 rmmod nvme_fabrics 00:16:02.955 rmmod nvme_keyring 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1933088 ']' 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1933088 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1933088 ']' 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1933088 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1933088 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:02.955 11:42:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:02.955 11:42:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1933088' 00:16:02.955 killing process with pid 1933088 00:16:02.955 11:42:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1933088 00:16:02.955 11:42:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1933088 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:03.214 11:42:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.747 11:42:33 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:05.747 00:16:05.747 real 0m21.623s 00:16:05.748 user 0m24.947s 00:16:05.748 sys 0m7.070s 00:16:05.748 11:42:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:05.748 11:42:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:05.748 ************************************ 00:16:05.748 END TEST nvmf_queue_depth 00:16:05.748 ************************************ 00:16:05.748 11:42:33 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:05.748 11:42:33 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:05.748 11:42:33 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:05.748 11:42:33 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:05.748 11:42:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:05.748 ************************************ 00:16:05.748 START TEST nvmf_target_multipath 00:16:05.748 ************************************ 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:05.748 * Looking for test storage... 00:16:05.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:05.748 11:42:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:12.318 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:12.318 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.318 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:12.319 Found net devices under 0000:af:00.0: cvl_0_0 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:12.319 Found net devices under 0000:af:00.1: cvl_0_1 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:12.319 11:42:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:12.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:12.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:16:12.319 00:16:12.319 --- 10.0.0.2 ping statistics --- 00:16:12.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.319 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:12.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:12.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:16:12.319 00:16:12.319 --- 10.0.0.1 ping statistics --- 00:16:12.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.319 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:12.319 only one NIC for nvmf test 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.319 rmmod nvme_tcp 00:16:12.319 rmmod nvme_fabrics 00:16:12.319 rmmod nvme_keyring 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.319 11:42:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.857 00:16:14.857 real 0m9.090s 00:16:14.857 user 0m1.930s 00:16:14.857 sys 0m5.196s 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:14.857 11:42:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:14.857 ************************************ 00:16:14.857 END TEST nvmf_target_multipath 00:16:14.857 ************************************ 00:16:14.857 11:42:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:14.857 11:42:42 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.857 11:42:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:14.857 11:42:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.857 11:42:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:14.857 ************************************ 00:16:14.857 START TEST nvmf_zcopy 00:16:14.857 ************************************ 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:14.857 * Looking for test storage... 
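nvmf_target_multipath never reaches its multipath I/O phase on this rig: nvmf/common.sh@240 left NVMF_SECOND_TARGET_IP empty (one of the two E810 ports is consumed as the initiator interface), so the guard at multipath.sh@45 trips, prints 'only one NIC for nvmf test', tears down via nvmftestfini, and exits 0. Reconstructed from the trace at multipath.sh@45-48, the guard amounts to the following sketch (the tested variable is not named in the xtrace output; $NVMF_SECOND_TARGET_IP is a presumption based on the empty assignment above):

    # multipath needs a second target IP to exercise a second path;
    # with only one usable target NIC the test is skipped as a pass
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi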
00:16:14.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.857 11:42:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:14.858 11:42:42 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:21.427 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:21.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.428 
11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:21.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:21.428 Found net devices under 0000:af:00.0: cvl_0_0 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:21.428 Found net devices under 0000:af:00.1: cvl_0_1 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:16:21.428 00:16:21.428 --- 10.0.0.2 ping statistics --- 00:16:21.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.428 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:21.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:16:21.428 00:16:21.428 --- 10.0.0.1 ping statistics --- 00:16:21.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.428 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.428 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1942561 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1942561 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1942561 ']' 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:21.429 11:42:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:21.429 [2024-07-15 11:42:49.455126] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:21.429 [2024-07-15 11:42:49.455171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.429 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.429 [2024-07-15 11:42:49.527741] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.689 [2024-07-15 11:42:49.598697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.689 [2024-07-15 11:42:49.598736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:21.689 [2024-07-15 11:42:49.598746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.689 [2024-07-15 11:42:49.598755] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.689 [2024-07-15 11:42:49.598762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.689 [2024-07-15 11:42:49.598784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 [2024-07-15 11:42:50.296353] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 [2024-07-15 11:42:50.312509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 malloc0 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 
11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:22.257 { 00:16:22.257 "params": { 00:16:22.257 "name": "Nvme$subsystem", 00:16:22.257 "trtype": "$TEST_TRANSPORT", 00:16:22.257 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:22.257 "adrfam": "ipv4", 00:16:22.257 "trsvcid": "$NVMF_PORT", 00:16:22.257 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:22.257 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:22.257 "hdgst": ${hdgst:-false}, 00:16:22.257 "ddgst": ${ddgst:-false} 00:16:22.257 }, 00:16:22.257 "method": "bdev_nvme_attach_controller" 00:16:22.257 } 00:16:22.257 EOF 00:16:22.257 )") 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:22.257 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:22.516 11:42:50 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:22.516 "params": { 00:16:22.516 "name": "Nvme1", 00:16:22.516 "trtype": "tcp", 00:16:22.516 "traddr": "10.0.0.2", 00:16:22.516 "adrfam": "ipv4", 00:16:22.516 "trsvcid": "4420", 00:16:22.516 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:22.516 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:22.516 "hdgst": false, 00:16:22.516 "ddgst": false 00:16:22.516 }, 00:16:22.516 "method": "bdev_nvme_attach_controller" 00:16:22.516 }' 00:16:22.516 [2024-07-15 11:42:50.395711] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:22.516 [2024-07-15 11:42:50.395759] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942604 ] 00:16:22.516 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.516 [2024-07-15 11:42:50.464863] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.516 [2024-07-15 11:42:50.535893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.774 Running I/O for 10 seconds... 
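At this point the target side is fully assembled: cvl_0_0 lives in the cvl_0_0_ns_spdk namespace with 10.0.0.2 and hosts nvmf_tgt, while cvl_0_1 stays in the root namespace with 10.0.0.1 and acts as the initiator. The rpc_cmd calls traced above go through the harness wrapper around scripts/rpc.py, so the same setup can be sketched as a handful of direct invocations; this is a minimal reconstruction with flags copied verbatim from the trace, not the script itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem allowing any host (-a), serial number -s, max 10 namespaces (-m)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as NSID 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # initiator side: the /dev/fd/62 seen in the trace is bash process substitution
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192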
00:16:32.747
00:16:32.747 Latency(us)
00:16:32.747 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:32.747 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:32.747 Verification LBA range: start 0x0 length 0x1000
00:16:32.747 Nvme1n1 : 10.05 8920.19 69.69 0.00 0.00 14250.85 2490.37 42152.76
00:16:32.747 ===================================================================================================================
00:16:32.747 Total : 8920.19 69.69 0.00 0.00 14250.85 2490.37 42152.76
00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1944433 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:33.026 { 00:16:33.026 "params": { 00:16:33.026 "name": "Nvme$subsystem", 00:16:33.026 "trtype": "$TEST_TRANSPORT", 00:16:33.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:33.026 "adrfam": "ipv4", 00:16:33.026 "trsvcid": "$NVMF_PORT", 00:16:33.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:33.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:33.026 "hdgst": ${hdgst:-false}, 00:16:33.026 "ddgst": ${ddgst:-false} 00:16:33.026 }, 00:16:33.026 "method": "bdev_nvme_attach_controller" 00:16:33.026 } 00:16:33.026 EOF 00:16:33.026 )") 00:16:33.026 [2024-07-15 11:43:00.952228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:00.952265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
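The heredoc above appends one bdev_nvme_attach_controller entry per subsystem to $config; the resolved entry is printed a few entries below. gen_nvmf_target_json then joins and wraps these entries before jq pretty-prints the result. Assuming the standard wrapper — a top-level "subsystems" array holding a single "bdev" subsystem, which this trace never shows directly — the full document bdevperf reads from /dev/fd/63 comes out roughly as:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }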
00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:33.026 11:43:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:33.026 "params": { 00:16:33.026 "name": "Nvme1", 00:16:33.026 "trtype": "tcp", 00:16:33.026 "traddr": "10.0.0.2", 00:16:33.026 "adrfam": "ipv4", 00:16:33.026 "trsvcid": "4420", 00:16:33.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:33.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:33.026 "hdgst": false, 00:16:33.026 "ddgst": false 00:16:33.026 }, 00:16:33.026 "method": "bdev_nvme_attach_controller" 00:16:33.026 }' 00:16:33.026 [2024-07-15 11:43:00.964224] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:00.964241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:00.976249] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:00.976261] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:00.988279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:00.988291] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:00.989038] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:33.026 [2024-07-15 11:43:00.989087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1944433 ] 00:16:33.026 [2024-07-15 11:43:01.000311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.000324] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.012344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.012356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.026 [2024-07-15 11:43:01.024377] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.024388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.036408] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.036420] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.048439] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.048450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.058550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.026 [2024-07-15 11:43:01.060470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.060481] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.072504] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.072517] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.084534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.084549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.096569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.096590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.026 [2024-07-15 11:43:01.108601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.026 [2024-07-15 11:43:01.108616] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.120633] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.120648] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.131826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.342 [2024-07-15 11:43:01.132663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.132675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.144704] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.144722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.156733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.156750] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.168762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.168775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.180791] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.180803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.192824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.192839] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.204855] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.204867] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.216901] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.216921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.228928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.228942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.240963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.240979] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.252995] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.253010] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.265026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.265037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.277058] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.277069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.289093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.289105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.301129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.301145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.313158] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.313171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.325194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.325207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.337230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.337243] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.349264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.349278] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.361296] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.361307] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.373329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.373340] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.385363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.385376] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 [2024-07-15 11:43:01.397403] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:33.342 [2024-07-15 11:43:01.397422] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:33.342 Running I/O for 5 seconds... 
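The wall of paired 'Requested NSID 1 already in use' / 'Unable to add namespace' errors above and below is expected output, not a failure: while the 5-second randrw job runs, the script keeps re-issuing the namespace add for an NSID that is already taken, and each failed attempt still pauses and resumes the subsystem (hence nvmf_rpc_ns_paused in the error), which is exactly the quiescing of in-flight zero-copy requests the test wants to exercise. A loop consistent with this trace would look like the sketch below; the exact code lives in test/nvmf/target/zcopy.sh, and NOT is assumed to be the harness helper that inverts an exit status:

    # keep poking the target while bdevperf ($perfpid from the trace) runs;
    # every add fails because NSID 1 exists, but the failed add still
    # pauses/resumes the subsystem under active zcopy I/O
    while kill -0 "$perfpid"; do
        NOT rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    done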
00:16:33.342 [2024-07-15 11:43:01.409427] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:33.342 [2024-07-15 11:43:01.409439] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical subsystem.c:2054 / nvmf_rpc.c:1546 error pair repeats for every subsequent add-namespace attempt, one pair roughly every 13-14 ms, from 11:43:01.427 through 11:43:05.508 (elapsed 00:16:33.342 to 00:16:37.508); several hundred repetitions elided ...]
00:16:37.508 [2024-07-15 11:43:05.521796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use [2024-07-15 11:43:05.521817]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.535118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.535137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.548634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.548654] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.562250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.562271] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.575629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.575649] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.588841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.588861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.508 [2024-07-15 11:43:05.602230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.508 [2024-07-15 11:43:05.602250] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.615913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.615933] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.629569] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.629592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.643075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.643095] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.656458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.656478] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.670279] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.670300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.683889] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.683909] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.697142] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.697162] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.710412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.710432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.723807] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.723826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.737067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.737086] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.750217] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.750237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.763762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.763782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.777292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.777311] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.791121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.791141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.806601] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.806621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.820557] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.820577] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.834038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.834058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.847539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.847559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:37.768 [2024-07-15 11:43:05.861036] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:37.768 [2024-07-15 11:43:05.861055] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.875026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.875046] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.886182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.886207] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.900124] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.900144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.913580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.913600] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.927129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.927149] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.940986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.941006] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.954277] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.954297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.967741] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.967763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.981482] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.981503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:05.995297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:05.995317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.008784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.008805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.022433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.022453] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.035928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.035949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.049310] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.049331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.062450] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.062471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.075903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.075925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.089727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.089747] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.100921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.100941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.115224] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.115244] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.028 [2024-07-15 11:43:06.128856] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.028 [2024-07-15 11:43:06.128876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.142153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.142177] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.155844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.155865] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.169967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.169987] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.185415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.185436] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.199361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.199381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.212727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.212748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.226260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.226280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.240018] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.240038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.253312] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.253332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.266961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.266981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.280088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.280108] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.293354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.293375] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.306838] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.306858] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.320184] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.320205] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.333890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.333911] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.347289] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.347310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.360797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.360818] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.374273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.374293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.288 [2024-07-15 11:43:06.387688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.288 [2024-07-15 11:43:06.387708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.547 [2024-07-15 11:43:06.401218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.547 [2024-07-15 11:43:06.401238] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.547 [2024-07-15 11:43:06.414292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.547 [2024-07-15 11:43:06.414312] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.547 [2024-07-15 11:43:06.425441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.547 [2024-07-15 11:43:06.425461] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 00:16:38.548 Latency(us) 00:16:38.548 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:38.548 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:16:38.548 Nvme1n1 : 5.01 17464.93 136.44 0.00 0.00 7321.73 3303.01 19922.94 00:16:38.548 =================================================================================================================== 00:16:38.548 Total : 17464.93 136.44 0.00 0.00 7321.73 3303.01 19922.94 00:16:38.548 [2024-07-15 11:43:06.436486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.436503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.448518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.448532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.460553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.460572] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.472581] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.472597] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.484609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.484623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.496640] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.496655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.508670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.508683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.520702] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.520715] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.532737] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.532754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.544763] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.544775] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.556797] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.556810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.568827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.568845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.580865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.580877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.592895] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.592907] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 [2024-07-15 11:43:06.604925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:38.548 [2024-07-15 11:43:06.604937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:38.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1944433) - No such process 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1944433 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- 
target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.548 delay0 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.548 11:43:06 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:38.807 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.807 [2024-07-15 11:43:06.690817] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:45.371 [2024-07-15 11:43:12.821025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5d9f40 is same with the state(5) to be set 00:16:45.371 Initializing NVMe Controllers 00:16:45.371 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:45.371 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:45.371 Initialization complete. Launching workers. 00:16:45.371 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 119 00:16:45.371 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 409, failed to submit 30 00:16:45.371 success 219, unsuccess 190, failed 0 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.371 rmmod nvme_tcp 00:16:45.371 rmmod nvme_fabrics 00:16:45.371 rmmod nvme_keyring 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1942561 ']' 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1942561 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1942561 ']' 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1942561 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:16:45.371 11:43:12 
nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942561 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942561' 00:16:45.371 killing process with pid 1942561 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1942561 00:16:45.371 11:43:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1942561 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.371 11:43:13 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.340 11:43:15 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:47.340 00:16:47.340 real 0m32.686s 00:16:47.340 user 0m41.956s 00:16:47.340 sys 0m13.253s 00:16:47.340 11:43:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:47.340 11:43:15 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:47.340 ************************************ 00:16:47.340 END TEST nvmf_zcopy 00:16:47.340 ************************************ 00:16:47.340 11:43:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:47.340 11:43:15 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:47.340 11:43:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:47.340 11:43:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:47.340 11:43:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:47.340 ************************************ 00:16:47.340 START TEST nvmf_nmic 00:16:47.340 ************************************ 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:47.340 * Looking for test storage... 
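The nvmf_zcopy run that just finished drives the SPDK abort example against the target while zcopy.sh races nvmf_subsystem_add_ns calls on an NSID that is already taken, which is what produced the long run of "Requested NSID 1 already in use" errors above. A standalone sketch of the I/O half, using the same arguments the log shows (path relative to the SPDK tree):

  # 5 s of randrw abort traffic at queue depth 64 against the TCP subsystem
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'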
00:16:47.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.340 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.341 11:43:15 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:16:47.341 11:43:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:53.910 
11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:53.910 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.910 11:43:21 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:53.910 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:53.910 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:53.911 Found net devices under 0000:af:00.0: cvl_0_0 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:53.911 Found net devices under 0000:af:00.1: cvl_0_1 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
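The discovery pass above resolves each whitelisted PCI function to its kernel netdev by globbing /sys/bus/pci/devices/$pci/net/; done by hand for the two ice ports this node reported (a sketch, assuming the same 0000:af:00.x addresses):

  # each PCI function exposes its netdev name under sysfs
  ls /sys/bus/pci/devices/0000:af:00.0/net/    # -> cvl_0_0
  ls /sys/bus/pci/devices/0000:af:00.1/net/    # -> cvl_0_1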
00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:53.911 11:43:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:16:54.170 00:16:54.170 --- 10.0.0.2 ping statistics --- 00:16:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.170 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:16:54.170 00:16:54.170 --- 10.0.0.1 ping statistics --- 00:16:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.170 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.170 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.429 11:43:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1950201 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1950201 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1950201 ']' 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.430 11:43:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:54.430 [2024-07-15 11:43:22.353126] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:16:54.430 [2024-07-15 11:43:22.353172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.430 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.430 [2024-07-15 11:43:22.426008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.430 [2024-07-15 11:43:22.498200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.430 [2024-07-15 11:43:22.498242] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
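Everything nvmftestinit and nvmfappstart traced above, plus the rpc_cmd configuration the nmic test issues next, reduces to roughly the following standalone sequence (a sketch; the interface names, addresses, and SPDK tree layout are this host's, and the polling loop only approximates what waitforlisten does):

  # move the target port into its own namespace and address both ends
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # launch the target inside the namespace and wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # the configuration nmic.sh drives through rpc_cmd
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Test case1 then creates a second subsystem (nqn.2016-06.io.spdk:cnode2, serial SPDK2) and expects nvmf_subsystem_add_ns of the same Malloc0 to fail, because the first subsystem already holds an exclusive-write claim on the bdev.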
00:16:54.430 [2024-07-15 11:43:22.498252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:54.430 [2024-07-15 11:43:22.498261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:54.430 [2024-07-15 11:43:22.498268] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:54.430 [2024-07-15 11:43:22.498320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:16:54.430 [2024-07-15 11:43:22.498414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:16:54.430 [2024-07-15 11:43:22.498500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:16:54.430 [2024-07-15 11:43:22.498502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 [2024-07-15 11:43:23.203616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 Malloc0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 [2024-07-15 11:43:23.258393] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:16:55.367 test case1: single bdev can't be used in multiple subsystems
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 [2024-07-15 11:43:23.282276] bdev.c:8104:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:16:55.367 [2024-07-15 11:43:23.282298] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:16:55.367 [2024-07-15 11:43:23.282308] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:55.367 request:
00:16:55.367 {
00:16:55.367 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:55.367 "namespace": {
00:16:55.367 "bdev_name": "Malloc0",
00:16:55.367 "no_auto_visible": false
00:16:55.367 },
00:16:55.367 "method": "nvmf_subsystem_add_ns",
00:16:55.367 "req_id": 1
00:16:55.367 }
00:16:55.367 Got JSON-RPC error response
00:16:55.367 response:
00:16:55.367 {
00:16:55.367 "code": -32602,
00:16:55.367 "message": "Invalid parameters"
00:16:55.367 }
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:16:55.367  Adding namespace failed - expected result.
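[editor's note] The exchange above is the heart of test case1: a bdev is claimed exclusive_write by the first subsystem that exposes it, so a second nvmf_subsystem_add_ns against the same bdev must be rejected with JSON-RPC error -32602. For reference, the sequence can be replayed by hand against a running nvmf_tgt; this is a minimal sketch assuming SPDK's scripts/rpc.py is on PATH as rpc.py and uses the default RPC socket, with the NQNs and the Malloc0 name mirroring the log:

  #!/usr/bin/env bash
  # Replay of the shared-bdev negative test exercised by target/nmic.sh above.
  set -e
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # Malloc0 is already claimed (exclusive_write) by cnode1, so this must fail:
  if ! rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo 'Adding namespace failed - expected result.'
  fi

rpc.py exits non-zero when the target returns a JSON-RPC error, which is what the nmic_status bookkeeping in the trace above keys off.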
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:16:55.367 test case2: host connect to nvmf target in multiple paths
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:16:55.367 [2024-07-15 11:43:23.298443] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:55.367 11:43:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:56.744 11:43:24 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:16:58.119 11:43:26 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:16:58.119 11:43:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:16:58.119 11:43:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:16:58.119 11:43:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:16:58.119 11:43:26 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:17:00.083 11:43:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:17:00.083 [global]
00:17:00.083 thread=1
00:17:00.083 invalidate=1
00:17:00.083 rw=write
00:17:00.083 time_based=1
00:17:00.083 runtime=1
00:17:00.083 ioengine=libaio
00:17:00.083 direct=1
00:17:00.083 bs=4096
00:17:00.083 iodepth=1
00:17:00.083 norandommap=0
00:17:00.083 numjobs=1
00:17:00.083
00:17:00.083 verify_dump=1
00:17:00.083 verify_backlog=512
00:17:00.083 verify_state_save=0
00:17:00.083 do_verify=1
00:17:00.083 verify=crc32c-intel
00:17:00.083 [job0]
00:17:00.083 filename=/dev/nvme0n1
00:17:00.083 Could not set queue depth (nvme0n1)
00:17:00.362 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:17:00.362 fio-3.35
00:17:00.362 Starting 1 thread
00:17:01.741
00:17:01.741 job0: (groupid=0, jobs=1): err= 0: pid=1951413: Mon Jul 15 11:43:29 2024
00:17:01.741 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec)
00:17:01.741 slat (nsec): min=11547, max=28408, avg=24250.71, stdev=3079.16
00:17:01.741 clat (usec): min=40866, max=41801, avg=41043.01, stdev=246.42
00:17:01.741 lat (usec): min=40890, max=41826, avg=41067.26, stdev=244.66
00:17:01.741 clat percentiles (usec):
00:17:01.741 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:17:01.741 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:17:01.741 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681],
00:17:01.741 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:17:01.741 | 99.99th=[41681]
00:17:01.741 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets
00:17:01.741 slat (usec): min=12, max=24596, avg=61.54, stdev=1086.43
00:17:01.741 clat (usec): min=187, max=510, avg=224.18, stdev=26.07
00:17:01.741 lat (usec): min=211, max=25106, avg=285.72, stdev=1099.36
00:17:01.741 clat percentiles (usec):
00:17:01.741 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208],
00:17:01.741 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223],
00:17:01.741 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 262],
00:17:01.741 | 99.00th=[ 273], 99.50th=[ 367], 99.90th=[ 510], 99.95th=[ 510],
00:17:01.741 | 99.99th=[ 510]
00:17:01.741 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:17:01.741 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:17:01.741 lat (usec) : 250=79.74%, 500=16.14%, 750=0.19%
00:17:01.741 lat (msec) : 50=3.94%
00:17:01.741 cpu : usr=0.40%, sys=1.09%, ctx=536, majf=0, minf=2
00:17:01.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:17:01.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:01.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:17:01.741 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:17:01.741 latency : target=0, window=0, percentile=100.00%, depth=1
00:17:01.741
00:17:01.741 Run status group 0 (all jobs):
00:17:01.741 READ: bw=83.1KiB/s (85.1kB/s), 83.1KiB/s-83.1KiB/s (85.1kB/s-85.1kB/s), io=84.0KiB (86.0kB), run=1011-1011msec
00:17:01.741 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec
00:17:01.741
00:17:01.741 Disk stats (read/write):
00:17:01.741 nvme0n1: ios=70/512, merge=0/0, ticks=1721/105, in_queue=1826, util=98.70%
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:01.741 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:17:01.741 11:43:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488
-- # nvmfcleanup 00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.742 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.742 rmmod nvme_tcp 00:17:01.742 rmmod nvme_fabrics 00:17:01.742 rmmod nvme_keyring 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1950201 ']' 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1950201 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1950201 ']' 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1950201 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:02.001 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1950201 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1950201' 00:17:02.002 killing process with pid 1950201 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1950201 00:17:02.002 11:43:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1950201 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:02.261 11:43:30 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.168 11:43:32 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:04.168 00:17:04.168 real 0m16.945s 00:17:04.168 user 0m40.640s 00:17:04.168 sys 0m6.303s 00:17:04.168 11:43:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:04.168 11:43:32 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:04.168 ************************************ 00:17:04.168 END TEST nvmf_nmic 00:17:04.168 ************************************ 00:17:04.168 11:43:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:04.428 11:43:32 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:04.428 11:43:32 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:04.428 11:43:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:04.428 11:43:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:04.428 ************************************ 00:17:04.428 START TEST nvmf_fio_target 00:17:04.428 ************************************ 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:04.428 * Looking for test storage... 00:17:04.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:04.428 11:43:32 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:04.429 11:43:32 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:12.553 11:43:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:12.553 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:12.553 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:12.553 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.554 11:43:39 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:12.554 Found net devices under 0000:af:00.0: cvl_0_0 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:12.554 Found net devices under 0000:af:00.1: cvl_0_1 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:12.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:12.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:17:12.554 00:17:12.554 --- 10.0.0.2 ping statistics --- 00:17:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.554 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:12.554 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:12.554 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:17:12.554 00:17:12.554 --- 10.0.0.1 ping statistics --- 00:17:12.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:12.554 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1955375 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1955375 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1955375 ']' 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
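[editor's note] Stripped of the xtrace noise, the target/initiator plumbing that nvmftestinit replayed just above reduces to the following sequence. This is a condensed sketch: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing follow this rig's e810 layout, with the target-side port isolated in its own network namespace while the initiator port stays in the root namespace:

  # Move the target port into a dedicated netns so NVMe/TCP traffic
  # really crosses the wire between the two physical ports.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

The target application itself is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF invocation below), which is what the waitforlisten above is polling for.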
00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.554 11:43:39 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:12.554 [2024-07-15 11:43:39.609957] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:12.554 [2024-07-15 11:43:39.610007] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:12.554 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.554 [2024-07-15 11:43:39.684595] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:12.554 [2024-07-15 11:43:39.759623] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:12.554 [2024-07-15 11:43:39.759661] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:12.554 [2024-07-15 11:43:39.759670] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:12.554 [2024-07-15 11:43:39.759679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:12.554 [2024-07-15 11:43:39.759686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:12.554 [2024-07-15 11:43:39.759732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.554 [2024-07-15 11:43:39.759750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:12.554 [2024-07-15 11:43:39.759848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:12.554 [2024-07-15 11:43:39.759852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.554 11:43:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.554 11:43:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:12.555 11:43:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:12.555 11:43:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:12.555 11:43:40 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.555 11:43:40 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.555 11:43:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:12.555 [2024-07-15 11:43:40.626197] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.814 11:43:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:12.814 11:43:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:12.814 11:43:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:13.073 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:13.073 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:13.332 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:13.332 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:13.590 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:13.590 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:13.590 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:13.848 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:13.848 11:43:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.107 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:14.107 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.364 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:14.364 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:14.364 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:14.622 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:14.622 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.881 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:14.881 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:14.881 11:43:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.139 [2024-07-15 11:43:43.114760] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.139 11:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:15.397 11:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:15.654 11:43:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:17.065 11:43:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:18.971 11:43:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:18.971 [global] 00:17:18.971 thread=1 00:17:18.971 invalidate=1 00:17:18.971 rw=write 00:17:18.971 time_based=1 00:17:18.971 runtime=1 00:17:18.971 ioengine=libaio 00:17:18.971 direct=1 00:17:18.971 bs=4096 00:17:18.971 iodepth=1 00:17:18.971 norandommap=0 00:17:18.971 numjobs=1 00:17:18.971 00:17:18.971 verify_dump=1 00:17:18.971 verify_backlog=512 00:17:18.971 verify_state_save=0 00:17:18.971 do_verify=1 00:17:18.971 verify=crc32c-intel 00:17:18.971 [job0] 00:17:18.971 filename=/dev/nvme0n1 00:17:18.971 [job1] 00:17:18.971 filename=/dev/nvme0n2 00:17:18.972 [job2] 00:17:18.972 filename=/dev/nvme0n3 00:17:18.972 [job3] 00:17:18.972 filename=/dev/nvme0n4 00:17:18.972 Could not set queue depth (nvme0n1) 00:17:18.972 Could not set queue depth (nvme0n2) 00:17:18.972 Could not set queue depth (nvme0n3) 00:17:18.972 Could not set queue depth (nvme0n4) 00:17:19.231 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.231 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.231 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.231 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:19.231 fio-3.35 00:17:19.231 Starting 4 threads 00:17:20.608 00:17:20.608 job0: (groupid=0, jobs=1): err= 0: pid=1956884: Mon Jul 15 11:43:48 2024 00:17:20.608 read: IOPS=1263, BW=5055KiB/s (5176kB/s)(5060KiB/1001msec) 00:17:20.608 slat (nsec): min=8269, max=31139, avg=8749.55, stdev=938.14 00:17:20.608 clat (usec): min=352, max=637, avg=466.69, stdev=32.86 00:17:20.608 lat (usec): min=361, max=645, avg=475.44, stdev=32.85 00:17:20.608 clat percentiles (usec): 00:17:20.608 | 1.00th=[ 367], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 453], 00:17:20.608 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 478], 60.00th=[ 482], 00:17:20.608 | 70.00th=[ 486], 80.00th=[ 490], 90.00th=[ 494], 95.00th=[ 502], 00:17:20.608 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 545], 99.95th=[ 635], 00:17:20.608 | 99.99th=[ 635] 00:17:20.608 
write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:20.608 slat (usec): min=7, max=40213, avg=38.29, stdev=1025.76 00:17:20.608 clat (usec): min=168, max=4066, avg=217.38, stdev=140.36 00:17:20.608 lat (usec): min=180, max=40623, avg=255.67, stdev=1040.18 00:17:20.608 clat percentiles (usec): 00:17:20.608 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 182], 20.00th=[ 188], 00:17:20.608 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:17:20.608 | 70.00th=[ 219], 80.00th=[ 245], 90.00th=[ 258], 95.00th=[ 281], 00:17:20.608 | 99.00th=[ 310], 99.50th=[ 363], 99.90th=[ 3916], 99.95th=[ 4080], 00:17:20.608 | 99.99th=[ 4080] 00:17:20.608 bw ( KiB/s): min= 6704, max= 6704, per=56.63%, avg=6704.00, stdev= 0.00, samples=1 00:17:20.608 iops : min= 1676, max= 1676, avg=1676.00, stdev= 0.00, samples=1 00:17:20.608 lat (usec) : 250=46.77%, 500=50.95%, 750=2.21% 00:17:20.608 lat (msec) : 4=0.04%, 10=0.04% 00:17:20.608 cpu : usr=1.90%, sys=3.00%, ctx=2804, majf=0, minf=2 00:17:20.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.608 issued rwts: total=1265,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.609 job1: (groupid=0, jobs=1): err= 0: pid=1956903: Mon Jul 15 11:43:48 2024 00:17:20.609 read: IOPS=21, BW=85.3KiB/s (87.3kB/s)(88.0KiB/1032msec) 00:17:20.609 slat (nsec): min=10301, max=25405, avg=12489.32, stdev=4534.51 00:17:20.609 clat (usec): min=40900, max=42018, avg=41162.55, stdev=393.28 00:17:20.609 lat (usec): min=40924, max=42043, avg=41175.04, stdev=394.25 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:17:20.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:20.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:17:20.609 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:20.609 | 99.99th=[42206] 00:17:20.609 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:17:20.609 slat (nsec): min=7604, max=33152, avg=12125.71, stdev=2469.59 00:17:20.609 clat (usec): min=170, max=423, avg=232.42, stdev=30.65 00:17:20.609 lat (usec): min=177, max=437, avg=244.54, stdev=31.94 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 208], 00:17:20.609 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 231], 60.00th=[ 241], 00:17:20.609 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 265], 95.00th=[ 273], 00:17:20.609 | 99.00th=[ 326], 99.50th=[ 392], 99.90th=[ 424], 99.95th=[ 424], 00:17:20.609 | 99.99th=[ 424] 00:17:20.609 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.609 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.609 lat (usec) : 250=67.98%, 500=27.90% 00:17:20.609 lat (msec) : 50=4.12% 00:17:20.609 cpu : usr=0.19%, sys=0.78%, ctx=534, majf=0, minf=1 00:17:20.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.609 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:17:20.609 job2: (groupid=0, jobs=1): err= 0: pid=1956911: Mon Jul 15 11:43:48 2024 00:17:20.609 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:17:20.609 slat (nsec): min=11047, max=25790, avg=24679.77, stdev=3083.92 00:17:20.609 clat (usec): min=40882, max=41976, avg=41068.94, stdev=296.34 00:17:20.609 lat (usec): min=40907, max=42002, avg=41093.62, stdev=296.22 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:20.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:20.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:17:20.609 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:20.609 | 99.99th=[42206] 00:17:20.609 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:17:20.609 slat (nsec): min=11491, max=41701, avg=12409.91, stdev=1690.75 00:17:20.609 clat (usec): min=200, max=499, avg=247.26, stdev=25.72 00:17:20.609 lat (usec): min=213, max=540, avg=259.67, stdev=26.36 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 221], 20.00th=[ 227], 00:17:20.609 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 253], 00:17:20.609 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 293], 00:17:20.609 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 498], 99.95th=[ 498], 00:17:20.609 | 99.99th=[ 498] 00:17:20.609 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.609 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.609 lat (usec) : 250=54.87%, 500=41.01% 00:17:20.609 lat (msec) : 50=4.12% 00:17:20.609 cpu : usr=0.10%, sys=0.87%, ctx=534, majf=0, minf=1 00:17:20.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.609 job3: (groupid=0, jobs=1): err= 0: pid=1956912: Mon Jul 15 11:43:48 2024 00:17:20.609 read: IOPS=19, BW=78.0KiB/s (79.8kB/s)(80.0KiB/1026msec) 00:17:20.609 slat (nsec): min=11472, max=26416, avg=24966.20, stdev=3220.59 00:17:20.609 clat (usec): min=40780, max=41969, avg=41094.90, stdev=337.71 00:17:20.609 lat (usec): min=40806, max=41994, avg=41119.86, stdev=336.44 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:20.609 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:20.609 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:17:20.609 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:20.609 | 99.99th=[42206] 00:17:20.609 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:17:20.609 slat (usec): min=12, max=40639, avg=171.82, stdev=2533.01 00:17:20.609 clat (usec): min=187, max=379, avg=223.25, stdev=25.57 00:17:20.609 lat (usec): min=200, max=41003, avg=395.08, stdev=2542.47 00:17:20.609 clat percentiles (usec): 00:17:20.609 | 1.00th=[ 192], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:17:20.609 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:17:20.609 | 70.00th=[ 231], 80.00th=[ 247], 
90.00th=[ 260], 95.00th=[ 269], 00:17:20.609 | 99.00th=[ 285], 99.50th=[ 367], 99.90th=[ 379], 99.95th=[ 379], 00:17:20.609 | 99.99th=[ 379] 00:17:20.609 bw ( KiB/s): min= 4096, max= 4096, per=34.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:20.609 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:20.609 lat (usec) : 250=79.51%, 500=16.73% 00:17:20.609 lat (msec) : 50=3.76% 00:17:20.609 cpu : usr=0.49%, sys=0.49%, ctx=535, majf=0, minf=1 00:17:20.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:20.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.609 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:20.609 00:17:20.609 Run status group 0 (all jobs): 00:17:20.609 READ: bw=5121KiB/s (5244kB/s), 78.0KiB/s-5055KiB/s (79.8kB/s-5176kB/s), io=5316KiB (5444kB), run=1001-1038msec 00:17:20.609 WRITE: bw=11.6MiB/s (12.1MB/s), 1973KiB/s-6138KiB/s (2020kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1038msec 00:17:20.609 00:17:20.609 Disk stats (read/write): 00:17:20.609 nvme0n1: ios=1049/1191, merge=0/0, ticks=1342/268, in_queue=1610, util=87.06% 00:17:20.609 nvme0n2: ios=66/512, merge=0/0, ticks=739/114, in_queue=853, util=88.40% 00:17:20.609 nvme0n3: ios=73/512, merge=0/0, ticks=751/128, in_queue=879, util=92.83% 00:17:20.609 nvme0n4: ios=39/512, merge=0/0, ticks=1523/113, in_queue=1636, util=98.59% 00:17:20.609 11:43:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:20.609 [global] 00:17:20.609 thread=1 00:17:20.609 invalidate=1 00:17:20.609 rw=randwrite 00:17:20.609 time_based=1 00:17:20.609 runtime=1 00:17:20.609 ioengine=libaio 00:17:20.609 direct=1 00:17:20.609 bs=4096 00:17:20.609 iodepth=1 00:17:20.609 norandommap=0 00:17:20.609 numjobs=1 00:17:20.609 00:17:20.609 verify_dump=1 00:17:20.609 verify_backlog=512 00:17:20.609 verify_state_save=0 00:17:20.609 do_verify=1 00:17:20.609 verify=crc32c-intel 00:17:20.609 [job0] 00:17:20.609 filename=/dev/nvme0n1 00:17:20.609 [job1] 00:17:20.609 filename=/dev/nvme0n2 00:17:20.609 [job2] 00:17:20.609 filename=/dev/nvme0n3 00:17:20.609 [job3] 00:17:20.609 filename=/dev/nvme0n4 00:17:20.609 Could not set queue depth (nvme0n1) 00:17:20.609 Could not set queue depth (nvme0n2) 00:17:20.609 Could not set queue depth (nvme0n3) 00:17:20.609 Could not set queue depth (nvme0n4) 00:17:20.868 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.868 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.868 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.868 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.868 fio-3.35 00:17:20.868 Starting 4 threads 00:17:22.247 00:17:22.247 job0: (groupid=0, jobs=1): err= 0: pid=1957329: Mon Jul 15 11:43:50 2024 00:17:22.247 read: IOPS=1177, BW=4711KiB/s (4824kB/s)(4716KiB/1001msec) 00:17:22.247 slat (nsec): min=8548, max=41533, avg=9422.86, stdev=1381.99 00:17:22.247 clat (usec): min=321, max=962, avg=460.49, stdev=48.88 00:17:22.247 lat (usec): min=330, max=973, avg=469.92, stdev=48.94 
00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 338], 5.00th=[ 363], 10.00th=[ 400], 20.00th=[ 429], 00:17:22.247 | 30.00th=[ 441], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 474], 00:17:22.247 | 70.00th=[ 486], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 537], 00:17:22.247 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 635], 99.95th=[ 963], 00:17:22.247 | 99.99th=[ 963] 00:17:22.247 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:22.247 slat (nsec): min=11382, max=44067, avg=12595.24, stdev=1956.00 00:17:22.247 clat (usec): min=207, max=441, avg=272.66, stdev=23.83 00:17:22.247 lat (usec): min=219, max=481, avg=285.25, stdev=24.00 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:17:22.247 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:17:22.247 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:17:22.247 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 437], 99.95th=[ 441], 00:17:22.247 | 99.99th=[ 441] 00:17:22.247 bw ( KiB/s): min= 6784, max= 6784, per=28.12%, avg=6784.00, stdev= 0.00, samples=1 00:17:22.247 iops : min= 1696, max= 1696, avg=1696.00, stdev= 0.00, samples=1 00:17:22.247 lat (usec) : 250=9.54%, 500=82.17%, 750=8.25%, 1000=0.04% 00:17:22.247 cpu : usr=3.50%, sys=3.70%, ctx=2715, majf=0, minf=1 00:17:22.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 issued rwts: total=1179,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.247 job1: (groupid=0, jobs=1): err= 0: pid=1957333: Mon Jul 15 11:43:50 2024 00:17:22.247 read: IOPS=1169, BW=4679KiB/s (4792kB/s)(4684KiB/1001msec) 00:17:22.247 slat (nsec): min=9026, max=25073, avg=9898.36, stdev=1063.86 00:17:22.247 clat (usec): min=395, max=976, avg=505.86, stdev=33.85 00:17:22.247 lat (usec): min=405, max=989, avg=515.76, stdev=33.91 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 412], 5.00th=[ 437], 10.00th=[ 482], 20.00th=[ 490], 00:17:22.247 | 30.00th=[ 498], 40.00th=[ 502], 50.00th=[ 506], 60.00th=[ 515], 00:17:22.247 | 70.00th=[ 519], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 545], 00:17:22.247 | 99.00th=[ 570], 99.50th=[ 594], 99.90th=[ 889], 99.95th=[ 979], 00:17:22.247 | 99.99th=[ 979] 00:17:22.247 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:22.247 slat (nsec): min=12148, max=44456, avg=13418.29, stdev=1774.27 00:17:22.247 clat (usec): min=181, max=1158, avg=239.19, stdev=40.47 00:17:22.247 lat (usec): min=194, max=1172, avg=252.61, stdev=40.60 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 192], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:17:22.247 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 239], 00:17:22.247 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 285], 00:17:22.247 | 99.00th=[ 302], 99.50th=[ 318], 99.90th=[ 832], 99.95th=[ 1156], 00:17:22.247 | 99.99th=[ 1156] 00:17:22.247 bw ( KiB/s): min= 7648, max= 7648, per=31.70%, avg=7648.00, stdev= 0.00, samples=1 00:17:22.247 iops : min= 1912, max= 1912, avg=1912.00, stdev= 0.00, samples=1 00:17:22.247 lat (usec) : 250=40.52%, 500=32.80%, 750=26.52%, 1000=0.11% 00:17:22.247 lat (msec) : 2=0.04% 00:17:22.247 cpu : usr=3.40%, sys=4.00%, ctx=2708, majf=0, 
minf=1 00:17:22.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 issued rwts: total=1171,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.247 job2: (groupid=0, jobs=1): err= 0: pid=1957337: Mon Jul 15 11:43:50 2024 00:17:22.247 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:17:22.247 slat (nsec): min=9167, max=26913, avg=10051.43, stdev=1544.13 00:17:22.247 clat (usec): min=345, max=41255, avg=608.59, stdev=2196.15 00:17:22.247 lat (usec): min=355, max=41265, avg=618.64, stdev=2196.15 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 408], 5.00th=[ 433], 10.00th=[ 437], 20.00th=[ 445], 00:17:22.247 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 474], 60.00th=[ 498], 00:17:22.247 | 70.00th=[ 515], 80.00th=[ 537], 90.00th=[ 562], 95.00th=[ 578], 00:17:22.247 | 99.00th=[ 619], 99.50th=[ 742], 99.90th=[41157], 99.95th=[41157], 00:17:22.247 | 99.99th=[41157] 00:17:22.247 write: IOPS=1428, BW=5714KiB/s (5851kB/s)(5720KiB/1001msec); 0 zone resets 00:17:22.247 slat (nsec): min=12080, max=47267, avg=13418.33, stdev=2218.69 00:17:22.247 clat (usec): min=190, max=4019, avg=238.06, stdev=102.71 00:17:22.247 lat (usec): min=204, max=4033, avg=251.48, stdev=102.87 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 219], 00:17:22.247 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:17:22.247 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 262], 95.00th=[ 277], 00:17:22.247 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 510], 99.95th=[ 4015], 00:17:22.247 | 99.99th=[ 4015] 00:17:22.247 bw ( KiB/s): min= 4096, max= 4096, per=16.98%, avg=4096.00, stdev= 0.00, samples=1 00:17:22.247 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:22.247 lat (usec) : 250=47.15%, 500=37.12%, 750=15.53%, 1000=0.04% 00:17:22.247 lat (msec) : 10=0.04%, 50=0.12% 00:17:22.247 cpu : usr=2.50%, sys=4.20%, ctx=2456, majf=0, minf=1 00:17:22.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 issued rwts: total=1024,1430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.247 job3: (groupid=0, jobs=1): err= 0: pid=1957338: Mon Jul 15 11:43:50 2024 00:17:22.247 read: IOPS=1119, BW=4480KiB/s (4587kB/s)(4484KiB/1001msec) 00:17:22.247 slat (nsec): min=8676, max=27726, avg=9594.11, stdev=1201.02 00:17:22.247 clat (usec): min=351, max=3048, avg=481.14, stdev=87.91 00:17:22.247 lat (usec): min=361, max=3060, avg=490.73, stdev=88.02 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 404], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 445], 00:17:22.247 | 30.00th=[ 453], 40.00th=[ 465], 50.00th=[ 474], 60.00th=[ 486], 00:17:22.247 | 70.00th=[ 498], 80.00th=[ 510], 90.00th=[ 529], 95.00th=[ 553], 00:17:22.247 | 99.00th=[ 586], 99.50th=[ 611], 99.90th=[ 791], 99.95th=[ 3064], 00:17:22.247 | 99.99th=[ 3064] 00:17:22.247 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:22.247 slat (nsec): min=11716, max=47296, avg=12805.87, stdev=1918.32 00:17:22.247 
clat (usec): min=218, max=2203, avg=275.19, stdev=55.53 00:17:22.247 lat (usec): min=231, max=2218, avg=288.00, stdev=55.76 00:17:22.247 clat percentiles (usec): 00:17:22.247 | 1.00th=[ 233], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 255], 00:17:22.247 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:17:22.247 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:17:22.247 | 99.00th=[ 355], 99.50th=[ 375], 99.90th=[ 603], 99.95th=[ 2212], 00:17:22.247 | 99.99th=[ 2212] 00:17:22.247 bw ( KiB/s): min= 6704, max= 6704, per=27.79%, avg=6704.00, stdev= 0.00, samples=1 00:17:22.247 iops : min= 1676, max= 1676, avg=1676.00, stdev= 0.00, samples=1 00:17:22.247 lat (usec) : 250=7.75%, 500=80.54%, 750=11.55%, 1000=0.08% 00:17:22.247 lat (msec) : 4=0.08% 00:17:22.247 cpu : usr=2.60%, sys=4.50%, ctx=2657, majf=0, minf=2 00:17:22.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:22.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:22.247 issued rwts: total=1121,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:22.247 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:22.247 00:17:22.247 Run status group 0 (all jobs): 00:17:22.247 READ: bw=17.5MiB/s (18.4MB/s), 4092KiB/s-4711KiB/s (4190kB/s-4824kB/s), io=17.6MiB (18.4MB), run=1001-1001msec 00:17:22.248 WRITE: bw=23.6MiB/s (24.7MB/s), 5714KiB/s-6138KiB/s (5851kB/s-6285kB/s), io=23.6MiB (24.7MB), run=1001-1001msec 00:17:22.248 00:17:22.248 Disk stats (read/write): 00:17:22.248 nvme0n1: ios=1074/1140, merge=0/0, ticks=508/296, in_queue=804, util=85.26% 00:17:22.248 nvme0n2: ios=1056/1146, merge=0/0, ticks=1546/259, in_queue=1805, util=99.18% 00:17:22.248 nvme0n3: ios=930/1024, merge=0/0, ticks=714/238, in_queue=952, util=97.22% 00:17:22.248 nvme0n4: ios=1081/1108, merge=0/0, ticks=564/294, in_queue=858, util=92.96% 00:17:22.248 11:43:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:22.248 [global] 00:17:22.248 thread=1 00:17:22.248 invalidate=1 00:17:22.248 rw=write 00:17:22.248 time_based=1 00:17:22.248 runtime=1 00:17:22.248 ioengine=libaio 00:17:22.248 direct=1 00:17:22.248 bs=4096 00:17:22.248 iodepth=128 00:17:22.248 norandommap=0 00:17:22.248 numjobs=1 00:17:22.248 00:17:22.248 verify_dump=1 00:17:22.248 verify_backlog=512 00:17:22.248 verify_state_save=0 00:17:22.248 do_verify=1 00:17:22.248 verify=crc32c-intel 00:17:22.248 [job0] 00:17:22.248 filename=/dev/nvme0n1 00:17:22.248 [job1] 00:17:22.248 filename=/dev/nvme0n2 00:17:22.248 [job2] 00:17:22.248 filename=/dev/nvme0n3 00:17:22.248 [job3] 00:17:22.248 filename=/dev/nvme0n4 00:17:22.248 Could not set queue depth (nvme0n1) 00:17:22.248 Could not set queue depth (nvme0n2) 00:17:22.248 Could not set queue depth (nvme0n3) 00:17:22.248 Could not set queue depth (nvme0n4) 00:17:22.507 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.507 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.507 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.507 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:22.507 fio-3.35 00:17:22.507 Starting 4 threads 
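The [global]/[jobN] sections echoed above are the job file that the fio-wrapper script generates for this write pass. For orientation only, an equivalent stand-alone run could be sketched with plain fio; the device paths are an assumption carried over from the earlier connect, and this uses one fio process per namespace rather than one process with four [jobN] sections.

    # Hedged stand-alone equivalent of the generated write job above;
    # /dev/nvme0nX paths assume the namespaces are still attached.
    for dev in /dev/nvme0n1 /dev/nvme0n2 /dev/nvme0n3 /dev/nvme0n4; do
      fio --name="job-${dev##*/}" --filename="$dev" --rw=write --bs=4096 \
          --iodepth=128 --ioengine=libaio --direct=1 --thread=1 --invalidate=1 \
          --time_based=1 --runtime=1 --numjobs=1 --verify=crc32c-intel \
          --do_verify=1 --verify_backlog=512 --verify_dump=1 \
          --verify_state_save=0 &
    done
    wait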
00:17:23.928 00:17:23.928 job0: (groupid=0, jobs=1): err= 0: pid=1957754: Mon Jul 15 11:43:51 2024 00:17:23.928 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:17:23.928 slat (usec): min=2, max=9530, avg=76.86, stdev=547.71 00:17:23.928 clat (usec): min=6711, max=25563, avg=10710.12, stdev=2541.98 00:17:23.928 lat (usec): min=6721, max=25573, avg=10786.98, stdev=2575.53 00:17:23.928 clat percentiles (usec): 00:17:23.928 | 1.00th=[ 7111], 5.00th=[ 7504], 10.00th=[ 8094], 20.00th=[ 9110], 00:17:23.928 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10421], 00:17:23.928 | 70.00th=[10814], 80.00th=[12780], 90.00th=[13829], 95.00th=[14746], 00:17:23.928 | 99.00th=[19530], 99.50th=[25560], 99.90th=[25560], 99.95th=[25560], 00:17:23.928 | 99.99th=[25560] 00:17:23.928 write: IOPS=6505, BW=25.4MiB/s (26.6MB/s)(25.7MiB/1010msec); 0 zone resets 00:17:23.928 slat (usec): min=3, max=17682, avg=72.34, stdev=504.30 00:17:23.928 clat (usec): min=1986, max=31488, avg=9481.23, stdev=3516.08 00:17:23.928 lat (usec): min=2003, max=31522, avg=9553.58, stdev=3529.43 00:17:23.928 clat percentiles (usec): 00:17:23.928 | 1.00th=[ 4146], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6915], 00:17:23.928 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9503], 00:17:23.928 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[13435], 95.00th=[14353], 00:17:23.928 | 99.00th=[23725], 99.50th=[25297], 99.90th=[25560], 99.95th=[25560], 00:17:23.928 | 99.99th=[31589] 00:17:23.928 bw ( KiB/s): min=24600, max=26944, per=34.60%, avg=25772.00, stdev=1657.46, samples=2 00:17:23.928 iops : min= 6150, max= 6736, avg=6443.00, stdev=414.36, samples=2 00:17:23.928 lat (msec) : 2=0.03%, 4=0.45%, 10=59.15%, 20=38.32%, 50=2.04% 00:17:23.928 cpu : usr=7.04%, sys=10.11%, ctx=465, majf=0, minf=1 00:17:23.928 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:23.928 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.928 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.928 issued rwts: total=6144,6571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.928 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.928 job1: (groupid=0, jobs=1): err= 0: pid=1957755: Mon Jul 15 11:43:51 2024 00:17:23.928 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:17:23.928 slat (usec): min=3, max=31785, avg=115.25, stdev=1168.79 00:17:23.928 clat (usec): min=5574, max=58772, avg=17461.97, stdev=9685.05 00:17:23.928 lat (usec): min=5582, max=58832, avg=17577.22, stdev=9766.27 00:17:23.928 clat percentiles (usec): 00:17:23.928 | 1.00th=[ 7504], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10421], 00:17:23.928 | 30.00th=[11600], 40.00th=[12256], 50.00th=[13960], 60.00th=[14615], 00:17:23.928 | 70.00th=[19006], 80.00th=[24511], 90.00th=[30802], 95.00th=[40109], 00:17:23.928 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:17:23.928 | 99.99th=[58983] 00:17:23.928 write: IOPS=4390, BW=17.2MiB/s (18.0MB/s)(17.3MiB/1011msec); 0 zone resets 00:17:23.928 slat (usec): min=2, max=13710, avg=79.15, stdev=669.20 00:17:23.928 clat (usec): min=792, max=76454, avg=12802.23, stdev=10342.35 00:17:23.928 lat (usec): min=806, max=76469, avg=12881.38, stdev=10399.49 00:17:23.929 clat percentiles (usec): 00:17:23.929 | 1.00th=[ 1336], 5.00th=[ 3359], 10.00th=[ 4883], 20.00th=[ 6783], 00:17:23.929 | 30.00th=[ 8291], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11863], 00:17:23.929 | 70.00th=[12911], 80.00th=[15664], 
90.00th=[21627], 95.00th=[26870], 00:17:23.929 | 99.00th=[65274], 99.50th=[71828], 99.90th=[74974], 99.95th=[74974], 00:17:23.929 | 99.99th=[76022] 00:17:23.929 bw ( KiB/s): min=15776, max=18712, per=23.15%, avg=17244.00, stdev=2076.07, samples=2 00:17:23.929 iops : min= 3944, max= 4678, avg=4311.00, stdev=519.02, samples=2 00:17:23.929 lat (usec) : 1000=0.11% 00:17:23.929 lat (msec) : 2=0.81%, 4=3.20%, 10=24.82%, 20=53.12%, 50=16.00% 00:17:23.929 lat (msec) : 100=1.94% 00:17:23.929 cpu : usr=5.15%, sys=6.73%, ctx=347, majf=0, minf=1 00:17:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.929 issued rwts: total=4096,4439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.929 job2: (groupid=0, jobs=1): err= 0: pid=1957756: Mon Jul 15 11:43:51 2024 00:17:23.929 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:17:23.929 slat (usec): min=2, max=22235, avg=125.01, stdev=885.58 00:17:23.929 clat (usec): min=6995, max=65468, avg=15800.11, stdev=8572.43 00:17:23.929 lat (usec): min=7006, max=65479, avg=15925.12, stdev=8642.31 00:17:23.929 clat percentiles (usec): 00:17:23.929 | 1.00th=[ 7177], 5.00th=[10814], 10.00th=[10945], 20.00th=[11076], 00:17:23.929 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12780], 60.00th=[13829], 00:17:23.929 | 70.00th=[15533], 80.00th=[20055], 90.00th=[21365], 95.00th=[28443], 00:17:23.929 | 99.00th=[58459], 99.50th=[63701], 99.90th=[65274], 99.95th=[65274], 00:17:23.929 | 99.99th=[65274] 00:17:23.929 write: IOPS=3401, BW=13.3MiB/s (13.9MB/s)(13.5MiB/1013msec); 0 zone resets 00:17:23.929 slat (usec): min=3, max=15061, avg=169.78, stdev=858.59 00:17:23.929 clat (usec): min=1985, max=65469, avg=23103.76, stdev=14107.05 00:17:23.929 lat (usec): min=2003, max=65486, avg=23273.54, stdev=14183.50 00:17:23.929 clat percentiles (usec): 00:17:23.929 | 1.00th=[ 3949], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[ 9503], 00:17:23.929 | 30.00th=[11731], 40.00th=[14484], 50.00th=[18744], 60.00th=[22676], 00:17:23.929 | 70.00th=[34341], 80.00th=[39584], 90.00th=[44827], 95.00th=[46924], 00:17:23.929 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55313], 99.95th=[65274], 00:17:23.929 | 99.99th=[65274] 00:17:23.929 bw ( KiB/s): min=12296, max=14256, per=17.82%, avg=13276.00, stdev=1385.93, samples=2 00:17:23.929 iops : min= 3074, max= 3564, avg=3319.00, stdev=346.48, samples=2 00:17:23.929 lat (msec) : 2=0.12%, 4=0.46%, 10=13.09%, 20=51.10%, 50=33.11% 00:17:23.929 lat (msec) : 100=2.12% 00:17:23.929 cpu : usr=5.43%, sys=4.25%, ctx=343, majf=0, minf=1 00:17:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:17:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.929 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.929 job3: (groupid=0, jobs=1): err= 0: pid=1957757: Mon Jul 15 11:43:51 2024 00:17:23.929 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:17:23.929 slat (usec): min=2, max=25575, avg=129.52, stdev=985.59 00:17:23.929 clat (usec): min=7221, max=89796, avg=17376.67, stdev=13183.57 00:17:23.929 lat (usec): min=7231, max=89810, avg=17506.18, 
stdev=13274.64 00:17:23.929 clat percentiles (usec): 00:17:23.929 | 1.00th=[ 9110], 5.00th=[10159], 10.00th=[10552], 20.00th=[10945], 00:17:23.929 | 30.00th=[11207], 40.00th=[12780], 50.00th=[13435], 60.00th=[14615], 00:17:23.929 | 70.00th=[15533], 80.00th=[17957], 90.00th=[24511], 95.00th=[56361], 00:17:23.929 | 99.00th=[78119], 99.50th=[83362], 99.90th=[89654], 99.95th=[89654], 00:17:23.929 | 99.99th=[89654] 00:17:23.929 write: IOPS=4358, BW=17.0MiB/s (17.8MB/s)(17.2MiB/1011msec); 0 zone resets 00:17:23.929 slat (usec): min=3, max=12587, avg=90.78, stdev=598.62 00:17:23.929 clat (usec): min=1426, max=48129, avg=12770.46, stdev=6397.17 00:17:23.929 lat (usec): min=1440, max=48147, avg=12861.24, stdev=6436.28 00:17:23.929 clat percentiles (usec): 00:17:23.929 | 1.00th=[ 4178], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[ 9110], 00:17:23.929 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11207], 60.00th=[11731], 00:17:23.929 | 70.00th=[13173], 80.00th=[14353], 90.00th=[19530], 95.00th=[21890], 00:17:23.929 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:17:23.929 | 99.99th=[47973] 00:17:23.929 bw ( KiB/s): min=14512, max=19720, per=22.98%, avg=17116.00, stdev=3682.61, samples=2 00:17:23.929 iops : min= 3628, max= 4930, avg=4279.00, stdev=920.65, samples=2 00:17:23.929 lat (msec) : 2=0.14%, 4=0.36%, 10=18.75%, 20=70.01%, 50=8.08% 00:17:23.929 lat (msec) : 100=2.66% 00:17:23.929 cpu : usr=5.25%, sys=6.83%, ctx=425, majf=0, minf=1 00:17:23.929 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:23.929 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.929 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:23.929 issued rwts: total=4096,4406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.929 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:23.929 00:17:23.929 Run status group 0 (all jobs): 00:17:23.929 READ: bw=67.1MiB/s (70.4MB/s), 11.8MiB/s-23.8MiB/s (12.4MB/s-24.9MB/s), io=68.0MiB (71.3MB), run=1010-1013msec 00:17:23.929 WRITE: bw=72.7MiB/s (76.3MB/s), 13.3MiB/s-25.4MiB/s (13.9MB/s-26.6MB/s), io=73.7MiB (77.3MB), run=1010-1013msec 00:17:23.929 00:17:23.929 Disk stats (read/write): 00:17:23.929 nvme0n1: ios=5144/5275, merge=0/0, ticks=53253/47297, in_queue=100550, util=97.29% 00:17:23.929 nvme0n2: ios=3550/3584, merge=0/0, ticks=47004/36702, in_queue=83706, util=96.93% 00:17:23.929 nvme0n3: ios=2583/2635, merge=0/0, ticks=40433/62212, in_queue=102645, util=96.17% 00:17:23.929 nvme0n4: ios=3092/3549, merge=0/0, ticks=43306/38312, in_queue=81618, util=93.84% 00:17:23.929 11:43:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:23.929 [global] 00:17:23.929 thread=1 00:17:23.929 invalidate=1 00:17:23.929 rw=randwrite 00:17:23.929 time_based=1 00:17:23.929 runtime=1 00:17:23.929 ioengine=libaio 00:17:23.929 direct=1 00:17:23.929 bs=4096 00:17:23.929 iodepth=128 00:17:23.929 norandommap=0 00:17:23.929 numjobs=1 00:17:23.929 00:17:23.929 verify_dump=1 00:17:23.929 verify_backlog=512 00:17:23.929 verify_state_save=0 00:17:23.929 do_verify=1 00:17:23.929 verify=crc32c-intel 00:17:23.929 [job0] 00:17:23.929 filename=/dev/nvme0n1 00:17:23.929 [job1] 00:17:23.929 filename=/dev/nvme0n2 00:17:23.929 [job2] 00:17:23.929 filename=/dev/nvme0n3 00:17:23.929 [job3] 00:17:23.929 filename=/dev/nvme0n4 00:17:23.929 Could not set queue depth (nvme0n1) 00:17:23.929 Could not set 
queue depth (nvme0n2) 00:17:23.929 Could not set queue depth (nvme0n3) 00:17:23.929 Could not set queue depth (nvme0n4) 00:17:24.188 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:24.188 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:24.188 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:24.188 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:24.188 fio-3.35 00:17:24.188 Starting 4 threads 00:17:25.567 00:17:25.567 job0: (groupid=0, jobs=1): err= 0: pid=1958181: Mon Jul 15 11:43:53 2024 00:17:25.567 read: IOPS=3604, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1002msec) 00:17:25.567 slat (nsec): min=1765, max=48428k, avg=106684.78, stdev=1195430.00 00:17:25.567 clat (usec): min=911, max=96115, avg=17593.08, stdev=13480.67 00:17:25.567 lat (usec): min=5109, max=96124, avg=17699.77, stdev=13585.21 00:17:25.567 clat percentiles (usec): 00:17:25.567 | 1.00th=[ 5276], 5.00th=[ 7308], 10.00th=[ 8029], 20.00th=[10028], 00:17:25.567 | 30.00th=[11076], 40.00th=[11863], 50.00th=[13042], 60.00th=[14484], 00:17:25.568 | 70.00th=[17433], 80.00th=[19268], 90.00th=[33817], 95.00th=[47449], 00:17:25.568 | 99.00th=[76022], 99.50th=[76022], 99.90th=[95945], 99.95th=[95945], 00:17:25.568 | 99.99th=[95945] 00:17:25.568 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:17:25.568 slat (usec): min=2, max=47040, avg=98.27, stdev=1034.93 00:17:25.568 clat (usec): min=883, max=68347, avg=15417.97, stdev=10797.32 00:17:25.568 lat (usec): min=895, max=68354, avg=15516.24, stdev=10829.71 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 1827], 5.00th=[ 4146], 10.00th=[ 5800], 20.00th=[ 8455], 00:17:25.568 | 30.00th=[10159], 40.00th=[11338], 50.00th=[11994], 60.00th=[12911], 00:17:25.568 | 70.00th=[16319], 80.00th=[20841], 90.00th=[27132], 95.00th=[40109], 00:17:25.568 | 99.00th=[56886], 99.50th=[61604], 99.90th=[62129], 99.95th=[62129], 00:17:25.568 | 99.99th=[68682] 00:17:25.568 bw ( KiB/s): min=15592, max=16384, per=22.77%, avg=15988.00, stdev=560.03, samples=2 00:17:25.568 iops : min= 3898, max= 4096, avg=3997.00, stdev=140.01, samples=2 00:17:25.568 lat (usec) : 1000=0.13% 00:17:25.568 lat (msec) : 2=0.56%, 4=1.79%, 10=22.72%, 20=54.01%, 50=17.92% 00:17:25.568 lat (msec) : 100=2.88% 00:17:25.568 cpu : usr=3.40%, sys=5.39%, ctx=374, majf=0, minf=1 00:17:25.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:17:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.568 issued rwts: total=3612,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.568 job1: (groupid=0, jobs=1): err= 0: pid=1958182: Mon Jul 15 11:43:53 2024 00:17:25.568 read: IOPS=4235, BW=16.5MiB/s (17.3MB/s)(17.3MiB/1043msec) 00:17:25.568 slat (usec): min=2, max=11494, avg=89.40, stdev=538.97 00:17:25.568 clat (usec): min=5944, max=53333, avg=13177.26, stdev=7531.12 00:17:25.568 lat (usec): min=5950, max=53342, avg=13266.66, stdev=7548.98 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 6980], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:17:25.568 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[10945], 60.00th=[11731], 
00:17:25.568 | 70.00th=[12518], 80.00th=[14353], 90.00th=[18482], 95.00th=[24249], 00:17:25.568 | 99.00th=[52691], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:17:25.568 | 99.99th=[53216] 00:17:25.568 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1043msec); 0 zone resets 00:17:25.568 slat (usec): min=2, max=40750, avg=119.66, stdev=897.66 00:17:25.568 clat (usec): min=362, max=125675, avg=15952.49, stdev=17163.09 00:17:25.568 lat (usec): min=376, max=125688, avg=16072.15, stdev=17257.92 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 725], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8979], 00:17:25.568 | 30.00th=[ 9634], 40.00th=[ 10290], 50.00th=[ 11076], 60.00th=[ 11600], 00:17:25.568 | 70.00th=[ 12780], 80.00th=[ 15401], 90.00th=[ 27132], 95.00th=[ 48497], 00:17:25.568 | 99.00th=[108528], 99.50th=[117965], 99.90th=[125305], 99.95th=[125305], 00:17:25.568 | 99.99th=[125305] 00:17:25.568 bw ( KiB/s): min=16384, max=20480, per=26.25%, avg=18432.00, stdev=2896.31, samples=2 00:17:25.568 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:25.568 lat (usec) : 500=0.47%, 750=0.06% 00:17:25.568 lat (msec) : 2=0.27%, 4=0.32%, 10=34.28%, 20=53.80%, 50=8.09% 00:17:25.568 lat (msec) : 100=2.07%, 250=0.65% 00:17:25.568 cpu : usr=4.03%, sys=6.24%, ctx=466, majf=0, minf=1 00:17:25.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.568 issued rwts: total=4418,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.568 job2: (groupid=0, jobs=1): err= 0: pid=1958184: Mon Jul 15 11:43:53 2024 00:17:25.568 read: IOPS=3520, BW=13.8MiB/s (14.4MB/s)(13.8MiB/1006msec) 00:17:25.568 slat (usec): min=2, max=16556, avg=143.61, stdev=956.23 00:17:25.568 clat (usec): min=2924, max=87185, avg=17034.84, stdev=9275.40 00:17:25.568 lat (usec): min=4604, max=87198, avg=17178.45, stdev=9356.03 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 8225], 5.00th=[10159], 10.00th=[11076], 20.00th=[12256], 00:17:25.568 | 30.00th=[13435], 40.00th=[14746], 50.00th=[15270], 60.00th=[16057], 00:17:25.568 | 70.00th=[17433], 80.00th=[19792], 90.00th=[23200], 95.00th=[25560], 00:17:25.568 | 99.00th=[73925], 99.50th=[84411], 99.90th=[87557], 99.95th=[87557], 00:17:25.568 | 99.99th=[87557] 00:17:25.568 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:17:25.568 slat (usec): min=3, max=9849, avg=129.45, stdev=622.00 00:17:25.568 clat (usec): min=1945, max=87156, avg=18760.60, stdev=10779.06 00:17:25.568 lat (usec): min=1961, max=87163, avg=18890.06, stdev=10822.68 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 3752], 5.00th=[ 8029], 10.00th=[ 9241], 20.00th=[11600], 00:17:25.568 | 30.00th=[13304], 40.00th=[13829], 50.00th=[15533], 60.00th=[19792], 00:17:25.568 | 70.00th=[21103], 80.00th=[24773], 90.00th=[28967], 95.00th=[33424], 00:17:25.568 | 99.00th=[77071], 99.50th=[79168], 99.90th=[80217], 99.95th=[87557], 00:17:25.568 | 99.99th=[87557] 00:17:25.568 bw ( KiB/s): min=12288, max=16384, per=20.41%, avg=14336.00, stdev=2896.31, samples=2 00:17:25.568 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:25.568 lat (msec) : 2=0.07%, 4=0.45%, 10=8.38%, 20=62.71%, 50=26.38% 00:17:25.568 lat (msec) : 100=2.01% 00:17:25.568 cpu : usr=3.78%, sys=5.17%, ctx=442, 
majf=0, minf=1 00:17:25.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.568 issued rwts: total=3542,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.568 job3: (groupid=0, jobs=1): err= 0: pid=1958185: Mon Jul 15 11:43:53 2024 00:17:25.568 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:17:25.568 slat (nsec): min=1797, max=8662.6k, avg=79373.52, stdev=532295.21 00:17:25.568 clat (usec): min=4223, max=21749, avg=10910.80, stdev=2445.36 00:17:25.568 lat (usec): min=4266, max=21777, avg=10990.17, stdev=2471.26 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 5473], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8848], 00:17:25.568 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[10945], 60.00th=[11600], 00:17:25.568 | 70.00th=[11994], 80.00th=[12911], 90.00th=[13435], 95.00th=[15008], 00:17:25.568 | 99.00th=[19268], 99.50th=[19268], 99.90th=[19530], 99.95th=[19530], 00:17:25.568 | 99.99th=[21627] 00:17:25.568 write: IOPS=5993, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec); 0 zone resets 00:17:25.568 slat (usec): min=2, max=10127, avg=83.81, stdev=501.38 00:17:25.568 clat (usec): min=1924, max=54149, avg=10961.34, stdev=4768.83 00:17:25.568 lat (usec): min=1950, max=54153, avg=11045.15, stdev=4787.75 00:17:25.568 clat percentiles (usec): 00:17:25.568 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6915], 20.00th=[ 7832], 00:17:25.568 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:17:25.568 | 70.00th=[11600], 80.00th=[12387], 90.00th=[14222], 95.00th=[16581], 00:17:25.568 | 99.00th=[31851], 99.50th=[37487], 99.90th=[54264], 99.95th=[54264], 00:17:25.568 | 99.99th=[54264] 00:17:25.568 bw ( KiB/s): min=22648, max=24520, per=33.58%, avg=23584.00, stdev=1323.70, samples=2 00:17:25.568 iops : min= 5662, max= 6130, avg=5896.00, stdev=330.93, samples=2 00:17:25.568 lat (msec) : 2=0.05%, 4=0.10%, 10=39.60%, 20=58.25%, 50=1.94% 00:17:25.568 lat (msec) : 100=0.06% 00:17:25.568 cpu : usr=4.88%, sys=8.47%, ctx=479, majf=0, minf=1 00:17:25.568 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:25.568 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.568 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.568 issued rwts: total=5632,6023,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.568 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.568 00:17:25.568 Run status group 0 (all jobs): 00:17:25.568 READ: bw=64.4MiB/s (67.6MB/s), 13.8MiB/s-21.9MiB/s (14.4MB/s-23.0MB/s), io=67.2MiB (70.5MB), run=1002-1043msec 00:17:25.568 WRITE: bw=68.6MiB/s (71.9MB/s), 13.9MiB/s-23.4MiB/s (14.6MB/s-24.5MB/s), io=71.5MiB (75.0MB), run=1002-1043msec 00:17:25.568 00:17:25.568 Disk stats (read/write): 00:17:25.568 nvme0n1: ios=3092/3093, merge=0/0, ticks=40416/34038, in_queue=74454, util=87.27% 00:17:25.568 nvme0n2: ios=3391/3584, merge=0/0, ticks=28549/44907, in_queue=73456, util=90.89% 00:17:25.568 nvme0n3: ios=2617/3071, merge=0/0, ticks=43343/57874, in_queue=101217, util=93.29% 00:17:25.568 nvme0n4: ios=4665/5120, merge=0/0, ticks=36750/39193, in_queue=75943, util=96.12% 00:17:25.568 11:43:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:25.568 11:43:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 
-- # fio_pid=1958307 00:17:25.568 11:43:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:25.568 11:43:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:25.568 [global] 00:17:25.568 thread=1 00:17:25.568 invalidate=1 00:17:25.568 rw=read 00:17:25.568 time_based=1 00:17:25.568 runtime=10 00:17:25.568 ioengine=libaio 00:17:25.568 direct=1 00:17:25.568 bs=4096 00:17:25.568 iodepth=1 00:17:25.568 norandommap=1 00:17:25.568 numjobs=1 00:17:25.568 00:17:25.568 [job0] 00:17:25.568 filename=/dev/nvme0n1 00:17:25.568 [job1] 00:17:25.568 filename=/dev/nvme0n2 00:17:25.568 [job2] 00:17:25.568 filename=/dev/nvme0n3 00:17:25.568 [job3] 00:17:25.568 filename=/dev/nvme0n4 00:17:25.568 Could not set queue depth (nvme0n1) 00:17:25.568 Could not set queue depth (nvme0n2) 00:17:25.568 Could not set queue depth (nvme0n3) 00:17:25.568 Could not set queue depth (nvme0n4) 00:17:25.827 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.827 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.827 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.827 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:25.827 fio-3.35 00:17:25.827 Starting 4 threads 00:17:28.359 11:43:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:28.616 11:43:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:28.617 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=266240, buflen=4096 00:17:28.617 fio: pid=1958603, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:28.875 11:43:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:28.875 11:43:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:28.875 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=282624, buflen=4096 00:17:28.875 fio: pid=1958602, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:29.133 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.133 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:29.133 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=299008, buflen=4096 00:17:29.133 fio: pid=1958600, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:29.133 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1417216, buflen=4096 00:17:29.133 fio: pid=1958601, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:29.133 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.133 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
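The traces above mark the start of the hotplug phase: a 10-second read job is left running in the background while the RAID and malloc bdevs behind the subsystem are deleted over RPC, so the results that follow are expected to end in err=121 (Remote I/O error). A condensed sketch of that pattern, with a hypothetical job-file name and rpc.py calls mirroring the ones traced above:

    # Start long-running reads, then delete the backing bdevs underneath
    # them; in-flight reads should fail with err=121 (Remote I/O error).
    fio ./hotplug-read.fio &        # hypothetical 10s read job file
    fio_pid=$!
    sleep 3                         # let I/O get in flight first
    for raid in concat0 raid0; do
      ./scripts/rpc.py bdev_raid_delete "$raid"
    done
    for malloc in Malloc{0..6}; do
      ./scripts/rpc.py bdev_malloc_delete "$malloc"
    done
    wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'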
00:17:29.392 00:17:29.392 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1958600: Mon Jul 15 11:43:57 2024 00:17:29.392 read: IOPS=24, BW=96.8KiB/s (99.1kB/s)(292KiB/3016msec) 00:17:29.392 slat (usec): min=10, max=23713, avg=540.79, stdev=3220.85 00:17:29.392 clat (usec): min=602, max=42247, avg=40470.63, stdev=4736.68 00:17:29.392 lat (usec): min=633, max=65960, avg=41018.51, stdev=5874.62 00:17:29.392 clat percentiles (usec): 00:17:29.392 | 1.00th=[ 603], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:29.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:29.392 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:17:29.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:29.392 | 99.99th=[42206] 00:17:29.392 bw ( KiB/s): min= 96, max= 104, per=14.22%, avg=99.20, stdev= 4.38, samples=5 00:17:29.392 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:17:29.392 lat (usec) : 750=1.35% 00:17:29.392 lat (msec) : 50=97.30% 00:17:29.392 cpu : usr=0.10%, sys=0.00%, ctx=76, majf=0, minf=1 00:17:29.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:29.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 issued rwts: total=74,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:29.392 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1958601: Mon Jul 15 11:43:57 2024 00:17:29.392 read: IOPS=109, BW=436KiB/s (446kB/s)(1384KiB/3177msec) 00:17:29.392 slat (usec): min=6, max=10692, avg=67.88, stdev=709.71 00:17:29.392 clat (usec): min=355, max=41959, avg=9082.80, stdev=16666.05 00:17:29.392 lat (usec): min=362, max=52041, avg=9150.85, stdev=16738.79 00:17:29.392 clat percentiles (usec): 00:17:29.392 | 1.00th=[ 363], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 383], 00:17:29.392 | 30.00th=[ 388], 40.00th=[ 392], 50.00th=[ 396], 60.00th=[ 400], 00:17:29.392 | 70.00th=[ 482], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:17:29.392 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:29.392 | 99.99th=[42206] 00:17:29.392 bw ( KiB/s): min= 96, max= 466, per=22.98%, avg=160.33, stdev=149.80, samples=6 00:17:29.392 iops : min= 24, max= 116, avg=40.00, stdev=37.25, samples=6 00:17:29.392 lat (usec) : 500=73.49%, 750=4.90% 00:17:29.392 lat (msec) : 50=21.33% 00:17:29.392 cpu : usr=0.09%, sys=0.22%, ctx=351, majf=0, minf=1 00:17:29.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:29.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 issued rwts: total=347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:29.392 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1958602: Mon Jul 15 11:43:57 2024 00:17:29.392 read: IOPS=24, BW=98.1KiB/s (100kB/s)(276KiB/2814msec) 00:17:29.392 slat (nsec): min=9376, max=34408, avg=25023.33, stdev=3046.04 00:17:29.392 clat (usec): min=764, max=42378, avg=40426.81, stdev=4849.74 00:17:29.392 lat (usec): min=799, max=42388, avg=40451.82, stdev=4848.52 00:17:29.392 clat percentiles 
(usec): 00:17:29.392 | 1.00th=[ 766], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:29.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:29.392 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:29.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:29.392 | 99.99th=[42206] 00:17:29.392 bw ( KiB/s): min= 96, max= 104, per=13.93%, avg=97.60, stdev= 3.58, samples=5 00:17:29.392 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:29.392 lat (usec) : 1000=1.43% 00:17:29.392 lat (msec) : 50=97.14% 00:17:29.392 cpu : usr=0.00%, sys=0.11%, ctx=73, majf=0, minf=1 00:17:29.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:29.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:29.392 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1958603: Mon Jul 15 11:43:57 2024 00:17:29.392 read: IOPS=24, BW=97.9KiB/s (100kB/s)(260KiB/2657msec) 00:17:29.392 slat (nsec): min=15454, max=34431, avg=25171.55, stdev=2005.08 00:17:29.392 clat (usec): min=623, max=42026, avg=40452.03, stdev=5026.37 00:17:29.392 lat (usec): min=658, max=42051, avg=40477.19, stdev=5025.22 00:17:29.392 clat percentiles (usec): 00:17:29.392 | 1.00th=[ 627], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:17:29.392 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:29.392 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:17:29.392 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:29.392 | 99.99th=[42206] 00:17:29.392 bw ( KiB/s): min= 96, max= 104, per=13.93%, avg=97.60, stdev= 3.58, samples=5 00:17:29.392 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:17:29.392 lat (usec) : 750=1.52% 00:17:29.392 lat (msec) : 50=96.97% 00:17:29.392 cpu : usr=0.15%, sys=0.00%, ctx=66, majf=0, minf=2 00:17:29.392 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:29.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:29.392 issued rwts: total=66,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:29.392 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:29.392 00:17:29.392 Run status group 0 (all jobs): 00:17:29.392 READ: bw=696KiB/s (713kB/s), 96.8KiB/s-436KiB/s (99.1kB/s-446kB/s), io=2212KiB (2265kB), run=2657-3177msec 00:17:29.392 00:17:29.392 Disk stats (read/write): 00:17:29.392 nvme0n1: ios=69/0, merge=0/0, ticks=2791/0, in_queue=2791, util=93.79% 00:17:29.392 nvme0n2: ios=129/0, merge=0/0, ticks=3060/0, in_queue=3060, util=95.01% 00:17:29.392 nvme0n3: ios=101/0, merge=0/0, ticks=3409/0, in_queue=3409, util=99.22% 00:17:29.392 nvme0n4: ios=63/0, merge=0/0, ticks=2550/0, in_queue=2550, util=96.45% 00:17:29.392 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.392 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:29.651 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.651 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:29.910 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.910 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:29.910 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:29.910 11:43:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1958307 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:30.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:30.169 nvmf hotplug test: fio failed as expected 00:17:30.169 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.428 
rmmod nvme_tcp 00:17:30.428 rmmod nvme_fabrics 00:17:30.428 rmmod nvme_keyring 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1955375 ']' 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1955375 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1955375 ']' 00:17:30.428 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1955375 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1955375 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1955375' 00:17:30.686 killing process with pid 1955375 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1955375 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1955375 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:30.686 11:43:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.219 11:44:00 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.219 00:17:33.219 real 0m28.543s 00:17:33.219 user 2m3.908s 00:17:33.219 sys 0m9.984s 00:17:33.219 11:44:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.219 11:44:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.219 ************************************ 00:17:33.219 END TEST nvmf_fio_target 00:17:33.219 ************************************ 00:17:33.219 11:44:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:33.219 11:44:00 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:33.219 11:44:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:33.219 11:44:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.219 11:44:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.219 ************************************ 00:17:33.219 START TEST 
nvmf_bdevio 00:17:33.219 ************************************ 00:17:33.219 11:44:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:33.219 * Looking for test storage... 00:17:33.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.220 11:44:01 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.787 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:39.787 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:39.788 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:39.788 Found net devices under 0000:af:00.0: cvl_0_0 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:39.788 
Found net devices under 0000:af:00.1: cvl_0_1 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.788 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:40.048 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.048 11:44:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:40.048 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.048 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:17:40.048 00:17:40.048 --- 10.0.0.2 ping statistics --- 00:17:40.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.048 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.048 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:40.048 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:17:40.048 00:17:40.048 --- 10.0.0.1 ping statistics --- 00:17:40.048 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.048 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1963092 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1963092 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1963092 ']' 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.048 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.048 [2024-07-15 11:44:08.124359] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:17:40.048 [2024-07-15 11:44:08.124404] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.308 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.308 [2024-07-15 11:44:08.199807] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.308 [2024-07-15 11:44:08.272613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.308 [2024-07-15 11:44:08.272657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
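The nvmf_tcp_init trace above builds the test topology: the first e810 port (cvl_0_0) is moved into a fresh network namespace as the target side at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and both directions are ping-checked. A minimal standalone sketch of the same setup, with interface names and addresses copied from the trace (run as root):

    # condensed from the nvmf_tcp_init commands traced above (nvmf/common.sh)
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                          # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves in
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

From here on every target-side command is prefixed with "ip netns exec cvl_0_0_ns_spdk" (the NVMF_TARGET_NS_CMD wrapping of NVMF_APP above), and the -m 0x78 core mask passed to nvmf_tgt selects cores 3-6 (0x78 = 0b01111000), which matches the four "Reactor started on core" notices that follow.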
00:17:40.308 [2024-07-15 11:44:08.272667] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.308 [2024-07-15 11:44:08.272676] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.308 [2024-07-15 11:44:08.272683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.308 [2024-07-15 11:44:08.272802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:40.308 [2024-07-15 11:44:08.272915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:40.308 [2024-07-15 11:44:08.273022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.308 [2024-07-15 11:44:08.273023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.876 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:40.876 [2024-07-15 11:44:08.975636] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.135 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.135 11:44:08 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:41.135 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.135 11:44:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.135 Malloc0 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
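The rpc_cmd calls above provision the target over its RPC socket: one TCP transport, a 64 MiB RAM-backed bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. rpc_cmd just forwards its arguments to scripts/rpc.py, so the same steps can be sketched as explicit invocations (a sketch; -o/-u are transport tuning flags carried in NVMF_TRANSPORT_OPTS, with -u understood here as the I/O unit size):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"   # the socket waitforlisten polled above
    $RPC nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8 KiB I/O units
    $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MiB malloc bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # becomes namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The -a flag makes the subsystem accept any host NQN, which is why the bdevio initiator can connect below without an allowed-hosts list.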
00:17:41.135 [2024-07-15 11:44:09.029940] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:41.135 { 00:17:41.135 "params": { 00:17:41.135 "name": "Nvme$subsystem", 00:17:41.135 "trtype": "$TEST_TRANSPORT", 00:17:41.135 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:41.135 "adrfam": "ipv4", 00:17:41.135 "trsvcid": "$NVMF_PORT", 00:17:41.135 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:41.135 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:41.135 "hdgst": ${hdgst:-false}, 00:17:41.135 "ddgst": ${ddgst:-false} 00:17:41.135 }, 00:17:41.135 "method": "bdev_nvme_attach_controller" 00:17:41.135 } 00:17:41.135 EOF 00:17:41.135 )") 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:17:41.135 11:44:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:41.135 "params": { 00:17:41.135 "name": "Nvme1", 00:17:41.135 "trtype": "tcp", 00:17:41.135 "traddr": "10.0.0.2", 00:17:41.135 "adrfam": "ipv4", 00:17:41.135 "trsvcid": "4420", 00:17:41.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.135 "hdgst": false, 00:17:41.135 "ddgst": false 00:17:41.135 }, 00:17:41.135 "method": "bdev_nvme_attach_controller" 00:17:41.135 }' 00:17:41.135 [2024-07-15 11:44:09.081370] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
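The /dev/fd/62 argument above is bash process substitution: bdevio.sh runs the tool as bdevio --json <(gen_nvmf_target_json), so the bdev_nvme_attach_controller configuration printed in the trace reaches the binary through an anonymous file descriptor and never touches disk. The pattern in isolation:

    # --json <(...) expands to --json /dev/fd/NN at run time
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json)
    # same mechanism with a trivial producer; cat sees a path like /dev/fd/63
    cat <(echo '{"subsystems": []}')

bdevio itself comes up on its own cores (-c 0x7, i.e. cores 0-2), as the DPDK EAL parameter line that follows shows, so the initiator and the target reactors never contend for the same CPUs.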
00:17:41.135 [2024-07-15 11:44:09.081418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963151 ] 00:17:41.135 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.135 [2024-07-15 11:44:09.152615] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:41.135 [2024-07-15 11:44:09.224822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.135 [2024-07-15 11:44:09.224919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.135 [2024-07-15 11:44:09.224920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.703 I/O targets: 00:17:41.703 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:41.703 00:17:41.703 00:17:41.703 CUnit - A unit testing framework for C - Version 2.1-3 00:17:41.703 http://cunit.sourceforge.net/ 00:17:41.703 00:17:41.703 00:17:41.703 Suite: bdevio tests on: Nvme1n1 00:17:41.703 Test: blockdev write read block ...passed 00:17:41.703 Test: blockdev write zeroes read block ...passed 00:17:41.703 Test: blockdev write zeroes read no split ...passed 00:17:41.703 Test: blockdev write zeroes read split ...passed 00:17:41.703 Test: blockdev write zeroes read split partial ...passed 00:17:41.703 Test: blockdev reset ...[2024-07-15 11:44:09.756528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:41.703 [2024-07-15 11:44:09.756589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d4810 (9): Bad file descriptor 00:17:41.703 [2024-07-15 11:44:09.808508] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:41.703 passed 00:17:41.962 Test: blockdev write read 8 blocks ...passed 00:17:41.962 Test: blockdev write read size > 128k ...passed 00:17:41.962 Test: blockdev write read invalid size ...passed 00:17:41.962 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:41.962 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:41.962 Test: blockdev write read max offset ...passed 00:17:41.962 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:41.962 Test: blockdev writev readv 8 blocks ...passed 00:17:41.962 Test: blockdev writev readv 30 x 1block ...passed 00:17:41.962 Test: blockdev writev readv block ...passed 00:17:41.962 Test: blockdev writev readv size > 128k ...passed 00:17:42.221 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:42.221 Test: blockdev comparev and writev ...[2024-07-15 11:44:10.069510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.069543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.069559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.069570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.069896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.069910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.069925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.069934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.070257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.070269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.070284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.070294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:42.221 [2024-07-15 11:44:10.070601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.221 [2024-07-15 11:44:10.070615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:42.222 [2024-07-15 11:44:10.070629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:42.222 [2024-07-15 11:44:10.070640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:42.222 passed 00:17:42.222 Test: blockdev nvme passthru rw ...passed 00:17:42.222 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:44:10.153229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.222 [2024-07-15 11:44:10.153247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:42.222 [2024-07-15 11:44:10.153453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.222 [2024-07-15 11:44:10.153467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:42.222 [2024-07-15 11:44:10.153673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.222 [2024-07-15 11:44:10.153685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:42.222 [2024-07-15 11:44:10.153892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:42.222 [2024-07-15 11:44:10.153904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:42.222 passed 00:17:42.222 Test: blockdev nvme admin passthru ...passed 00:17:42.222 Test: blockdev copy ...passed 00:17:42.222 00:17:42.222 Run Summary: Type Total Ran Passed Failed Inactive 00:17:42.222 suites 1 1 n/a 0 0 00:17:42.222 tests 23 23 23 0 0 00:17:42.222 asserts 152 152 152 0 n/a 00:17:42.222 00:17:42.222 Elapsed time = 1.334 seconds 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:42.481 rmmod nvme_tcp 00:17:42.481 rmmod nvme_fabrics 00:17:42.481 rmmod nvme_keyring 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1963092 ']' 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1963092 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1963092 ']' 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1963092 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1963092 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1963092' 00:17:42.481 killing process with pid 1963092 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1963092 00:17:42.481 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1963092 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.741 11:44:10 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.286 11:44:12 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:45.286 00:17:45.286 real 0m11.838s 00:17:45.286 user 0m14.172s 00:17:45.286 sys 0m5.971s 00:17:45.286 11:44:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:45.286 11:44:12 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:17:45.286 ************************************ 00:17:45.286 END TEST nvmf_bdevio 00:17:45.286 ************************************ 00:17:45.286 11:44:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:45.286 11:44:12 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:45.286 11:44:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:45.286 11:44:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.286 11:44:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:45.286 ************************************ 00:17:45.286 START TEST nvmf_auth_target 00:17:45.286 ************************************ 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:45.286 * Looking for test storage... 
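The COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) notices in the comparev-and-writev test above are expected error paths, not failures: the test deliberately issues fused COMPARE+WRITE pairs whose compare miscompares, so the compare completes with a compare-failure status and the fused write is aborted; the run summary still shows all 23 tests passed. Teardown then follows the standard autotest pattern traced above: unload nvme-tcp/nvme-fabrics/nvme-keyring, then killprocess, which checks what it is about to kill before signalling. A simplified sketch of that helper (the real autotest_common.sh version also special-cases a sudo wrapper, skipped here):

    killprocess() {
        local pid=$1 name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone, nothing to do
        name=$(ps --no-headers -o comm= "$pid")    # "reactor_3" in this run
        echo "killing process with pid $pid"       # matches the trace output
        kill "$pid" && wait "$pid"                 # SIGTERM, then reap; wait works
    }                                              # because the harness spawned $pid

The comm name comes back as reactor_3 because SPDK renames its main thread after the reactor it runs, and core 3 is the lowest bit set in the 0x78 mask.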
00:17:45.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:45.286 11:44:12 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:45.286 11:44:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.890 11:44:19 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:51.890 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:51.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:51.890 Found net devices under 0000:af:00.0: cvl_0_0 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.890 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:51.891 Found net devices under 0000:af:00.1: cvl_0_1 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:51.891 00:17:51.891 --- 10.0.0.2 ping statistics --- 00:17:51.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.891 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:17:51.891 00:17:51.891 --- 10.0.0.1 ping statistics --- 00:17:51.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.891 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.891 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:52.149 11:44:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:52.149 11:44:19 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:52.149 11:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:52.149 11:44:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1967127 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1967127 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1967127 ']' 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
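The auth test drives two SPDK processes at once: the nvmf_tgt just started inside the namespace with -L nvmf_auth (target-side auth debug logging), and, next in the trace, a second spdk_tgt pinned to core 1 that plays the host role and exposes its own RPC socket at /var/tmp/host.sock with -L nvme_auth. The launch pair, with arguments as traced:

    # target side, inside the namespace; RPC on the default /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # host side; -m 2 selects core 1 only, with a separate RPC socket
    build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &
    hostpid=$!

waitforlisten then polls each socket (the default /var/tmp/spdk.sock here, and /var/tmp/host.sock further below) until the owning process accepts RPC connections.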
00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.149 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1967346 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:53.085 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d40cbece62a57e3677e56f8e9279f86ceffa844ac7027b3 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.put 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d40cbece62a57e3677e56f8e9279f86ceffa844ac7027b3 0 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d40cbece62a57e3677e56f8e9279f86ceffa844ac7027b3 0 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d40cbece62a57e3677e56f8e9279f86ceffa844ac7027b3 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.put 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.put 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.put 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d59acf9ac051a09c95d1e99e911827489caff144280cea6f7db097f723e07d9a 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ayW 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d59acf9ac051a09c95d1e99e911827489caff144280cea6f7db097f723e07d9a 3 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d59acf9ac051a09c95d1e99e911827489caff144280cea6f7db097f723e07d9a 3 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d59acf9ac051a09c95d1e99e911827489caff144280cea6f7db097f723e07d9a 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:53.086 11:44:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ayW 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ayW 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ayW 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4d67082cf028f5913882267453fe131a 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.BUT 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4d67082cf028f5913882267453fe131a 1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4d67082cf028f5913882267453fe131a 1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=4d67082cf028f5913882267453fe131a 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.BUT 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.BUT 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.BUT 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=60dc296261e2affddbe2858d76ff2d9d91bf6d2c932c547c 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uq8 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 60dc296261e2affddbe2858d76ff2d9d91bf6d2c932c547c 2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 60dc296261e2affddbe2858d76ff2d9d91bf6d2c932c547c 2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=60dc296261e2affddbe2858d76ff2d9d91bf6d2c932c547c 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uq8 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uq8 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.uq8 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=56f915901409d9fff9b57c26c0830c5354fee25685b0fb28 00:17:53.086 
11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.U0t 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 56f915901409d9fff9b57c26c0830c5354fee25685b0fb28 2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 56f915901409d9fff9b57c26c0830c5354fee25685b0fb28 2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=56f915901409d9fff9b57c26c0830c5354fee25685b0fb28 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:53.086 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.U0t 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.U0t 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.U0t 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=48f1389eb82a8e2676cfd4b7bb2b415d 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.XIH 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 48f1389eb82a8e2676cfd4b7bb2b415d 1 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 48f1389eb82a8e2676cfd4b7bb2b415d 1 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=48f1389eb82a8e2676cfd4b7bb2b415d 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.XIH 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.XIH 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.XIH 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=060cb9cd452091042b0fe2aee8f53b6ea302753ebb7fc682f452f41dba8f83c5 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.TVl 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 060cb9cd452091042b0fe2aee8f53b6ea302753ebb7fc682f452f41dba8f83c5 3 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 060cb9cd452091042b0fe2aee8f53b6ea302753ebb7fc682f452f41dba8f83c5 3 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=060cb9cd452091042b0fe2aee8f53b6ea302753ebb7fc682f452f41dba8f83c5 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.TVl 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.TVl 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.TVl 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1967127 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1967127 ']' 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.345 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
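
[For readability: the gen_dhchap_key/format_key pairs traced above reduce to a small recipe — random hex from /dev/urandom, then a DHHC-1 wrapper whose base64 payload is the ASCII hex key followed by a 4-byte little-endian CRC-32 trailer. The trailer interpretation is inferred from decoding the DHHC-1 secrets printed later in this log, not from the helper source, so treat this as a minimal standalone sketch with digest_id and len standing in for the two helper arguments:

  len=48 digest_id=2                                  # e.g. a sha384 key of 48 hex chars
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # len/2 random bytes -> len hex chars
  file=$(mktemp -t spdk.key-sha384.XXX)
  python3 - "$key" "$digest_id" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])  # key stays in its ASCII hex form
crc = zlib.crc32(key).to_bytes(4, "little")           # 4-byte integrity trailer
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
  chmod 0600 "$file"                                  # secrets must not be world-readable

The digest_id mapping matches the digests array traced above: 0=null, 1=sha256, 2=sha384, 3=sha512.]
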
00:17:53.346 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.346 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1967346 /var/tmp/host.sock 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1967346 ']' 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:53.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.616 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.617 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.617 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:53.617 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:53.617 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.617 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.put 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.put 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.put 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ayW ]] 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ayW 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ayW 00:17:53.876 11:44:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ayW 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.BUT 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.BUT 00:17:54.135 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.BUT 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.uq8 ]] 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uq8 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.393 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uq8 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uq8 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.U0t 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.U0t 00:17:54.394 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.U0t 00:17:54.652 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.XIH ]] 00:17:54.652 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XIH 00:17:54.653 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.653 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.653 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.653 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XIH 00:17:54.653 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.XIH 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.TVl 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.TVl 00:17:54.911 11:44:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.TVl 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.170 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.429 00:17:55.429 11:44:23 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.429 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.429 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.687 { 00:17:55.687 "cntlid": 1, 00:17:55.687 "qid": 0, 00:17:55.687 "state": "enabled", 00:17:55.687 "thread": "nvmf_tgt_poll_group_000", 00:17:55.687 "listen_address": { 00:17:55.687 "trtype": "TCP", 00:17:55.687 "adrfam": "IPv4", 00:17:55.687 "traddr": "10.0.0.2", 00:17:55.687 "trsvcid": "4420" 00:17:55.687 }, 00:17:55.687 "peer_address": { 00:17:55.687 "trtype": "TCP", 00:17:55.687 "adrfam": "IPv4", 00:17:55.687 "traddr": "10.0.0.1", 00:17:55.687 "trsvcid": "53296" 00:17:55.687 }, 00:17:55.687 "auth": { 00:17:55.687 "state": "completed", 00:17:55.687 "digest": "sha256", 00:17:55.687 "dhgroup": "null" 00:17:55.687 } 00:17:55.687 } 00:17:55.687 ]' 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.687 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.688 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.946 11:44:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.514 11:44:24 
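
[Condensed for orientation — a sketch assembled from the rpc.py invocations already visible in this trace, with host_nqn standing in for the long uuid NQN used throughout. Each (digest, dhgroup, keyid) iteration drives the two RPC servers in the same order:

  host_nqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups null          # host-side allowed parameters
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1              # target side, default /var/tmp/spdk.sock
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$host_nqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1              # authenticated qpair setup

The qpair is then inspected, detached, re-exercised through the kernel initiator with nvme connect, and finally removed with nvmf_subsystem_remove_host before the next keyid.]
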
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.514 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.773 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.032 00:17:57.032 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.032 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.032 11:44:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.032 { 00:17:57.032 "cntlid": 3, 00:17:57.032 "qid": 0, 00:17:57.032 
"state": "enabled", 00:17:57.032 "thread": "nvmf_tgt_poll_group_000", 00:17:57.032 "listen_address": { 00:17:57.032 "trtype": "TCP", 00:17:57.032 "adrfam": "IPv4", 00:17:57.032 "traddr": "10.0.0.2", 00:17:57.032 "trsvcid": "4420" 00:17:57.032 }, 00:17:57.032 "peer_address": { 00:17:57.032 "trtype": "TCP", 00:17:57.032 "adrfam": "IPv4", 00:17:57.032 "traddr": "10.0.0.1", 00:17:57.032 "trsvcid": "53314" 00:17:57.032 }, 00:17:57.032 "auth": { 00:17:57.032 "state": "completed", 00:17:57.032 "digest": "sha256", 00:17:57.032 "dhgroup": "null" 00:17:57.032 } 00:17:57.032 } 00:17:57.032 ]' 00:17:57.032 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.291 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.550 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.119 11:44:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:58.119 11:44:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.119 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.378 00:17:58.378 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.378 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.378 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.637 { 00:17:58.637 "cntlid": 5, 00:17:58.637 "qid": 0, 00:17:58.637 "state": "enabled", 00:17:58.637 "thread": "nvmf_tgt_poll_group_000", 00:17:58.637 "listen_address": { 00:17:58.637 "trtype": "TCP", 00:17:58.637 "adrfam": "IPv4", 00:17:58.637 "traddr": "10.0.0.2", 00:17:58.637 "trsvcid": "4420" 00:17:58.637 }, 00:17:58.637 "peer_address": { 00:17:58.637 "trtype": "TCP", 00:17:58.637 "adrfam": "IPv4", 00:17:58.637 "traddr": "10.0.0.1", 00:17:58.637 "trsvcid": "53338" 00:17:58.637 }, 00:17:58.637 "auth": { 00:17:58.637 "state": "completed", 00:17:58.637 "digest": "sha256", 00:17:58.637 "dhgroup": "null" 00:17:58.637 } 00:17:58.637 } 00:17:58.637 ]' 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.637 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.896 11:44:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.464 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.723 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.723 00:17:59.982 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.982 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.982 11:44:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.982 { 00:17:59.982 "cntlid": 7, 00:17:59.982 "qid": 0, 00:17:59.982 "state": "enabled", 00:17:59.982 "thread": "nvmf_tgt_poll_group_000", 00:17:59.982 "listen_address": { 00:17:59.982 "trtype": "TCP", 00:17:59.982 "adrfam": "IPv4", 00:17:59.982 "traddr": "10.0.0.2", 00:17:59.982 "trsvcid": "4420" 00:17:59.982 }, 00:17:59.982 "peer_address": { 00:17:59.982 "trtype": "TCP", 00:17:59.982 "adrfam": "IPv4", 00:17:59.982 "traddr": "10.0.0.1", 00:17:59.982 "trsvcid": "53360" 00:17:59.982 }, 00:17:59.982 "auth": { 00:17:59.982 "state": "completed", 00:17:59.982 "digest": "sha256", 00:17:59.982 "dhgroup": "null" 00:17:59.982 } 00:17:59.982 } 00:17:59.982 ]' 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.982 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.241 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:00.808 11:44:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.067 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.326 00:18:01.326 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.326 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.326 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.588 { 00:18:01.588 "cntlid": 9, 00:18:01.588 "qid": 0, 00:18:01.588 "state": "enabled", 00:18:01.588 "thread": "nvmf_tgt_poll_group_000", 00:18:01.588 "listen_address": { 00:18:01.588 "trtype": "TCP", 00:18:01.588 "adrfam": "IPv4", 00:18:01.588 "traddr": "10.0.0.2", 00:18:01.588 "trsvcid": "4420" 00:18:01.588 }, 00:18:01.588 "peer_address": { 00:18:01.588 "trtype": "TCP", 00:18:01.588 "adrfam": "IPv4", 00:18:01.588 "traddr": "10.0.0.1", 00:18:01.588 "trsvcid": "53388" 00:18:01.588 }, 00:18:01.588 "auth": { 00:18:01.588 "state": "completed", 00:18:01.588 "digest": "sha256", 00:18:01.588 "dhgroup": "ffdhe2048" 00:18:01.588 } 00:18:01.588 } 00:18:01.588 ]' 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.588 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.848 11:44:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:02.415 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.674 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.674 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.932 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.932 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.932 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.932 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.932 11:44:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.933 11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.933 { 00:18:02.933 "cntlid": 11, 00:18:02.933 "qid": 0, 00:18:02.933 "state": "enabled", 00:18:02.933 "thread": "nvmf_tgt_poll_group_000", 00:18:02.933 "listen_address": { 00:18:02.933 "trtype": "TCP", 00:18:02.933 "adrfam": "IPv4", 00:18:02.933 "traddr": "10.0.0.2", 00:18:02.933 "trsvcid": "4420" 00:18:02.933 }, 00:18:02.933 "peer_address": { 00:18:02.933 "trtype": "TCP", 00:18:02.933 "adrfam": "IPv4", 00:18:02.933 "traddr": "10.0.0.1", 00:18:02.933 "trsvcid": "53406" 00:18:02.933 }, 00:18:02.933 "auth": { 00:18:02.933 "state": "completed", 00:18:02.933 "digest": "sha256", 00:18:02.933 "dhgroup": "ffdhe2048" 00:18:02.933 } 00:18:02.933 } 00:18:02.933 ]' 00:18:02.933 
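
[The assertion pattern repeated after every attach is worth isolating. This is a sketch using the same jq paths as the trace; rpc_cmd here is assumed to be the target-socket wrapper the harness defines elsewhere:

  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256" ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe2048" ]]  # negotiated DH group
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # handshake finished

A failed DH-HMAC-CHAP negotiation would surface here as a state other than "completed", before the kernel-initiator connect is even attempted.]
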
11:44:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.933 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.933 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.191 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.759 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.018 11:44:31 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:04.018 11:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.018 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.018 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.277 00:18:04.277 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.277 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.277 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.537 { 00:18:04.537 "cntlid": 13, 00:18:04.537 "qid": 0, 00:18:04.537 "state": "enabled", 00:18:04.537 "thread": "nvmf_tgt_poll_group_000", 00:18:04.537 "listen_address": { 00:18:04.537 "trtype": "TCP", 00:18:04.537 "adrfam": "IPv4", 00:18:04.537 "traddr": "10.0.0.2", 00:18:04.537 "trsvcid": "4420" 00:18:04.537 }, 00:18:04.537 "peer_address": { 00:18:04.537 "trtype": "TCP", 00:18:04.537 "adrfam": "IPv4", 00:18:04.537 "traddr": "10.0.0.1", 00:18:04.537 "trsvcid": "41586" 00:18:04.537 }, 00:18:04.537 "auth": { 00:18:04.537 "state": "completed", 00:18:04.537 "digest": "sha256", 00:18:04.537 "dhgroup": "ffdhe2048" 00:18:04.537 } 00:18:04.537 } 00:18:04.537 ]' 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.537 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.795 11:44:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.359 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.359 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.617 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.875 00:18:05.875 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.875 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.875 11:44:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.134 { 00:18:06.134 "cntlid": 15, 00:18:06.134 "qid": 0, 00:18:06.134 "state": "enabled", 00:18:06.134 "thread": "nvmf_tgt_poll_group_000", 00:18:06.134 "listen_address": { 00:18:06.134 "trtype": "TCP", 00:18:06.134 "adrfam": "IPv4", 00:18:06.134 "traddr": "10.0.0.2", 00:18:06.134 "trsvcid": "4420" 00:18:06.134 }, 00:18:06.134 "peer_address": { 00:18:06.134 "trtype": "TCP", 00:18:06.134 "adrfam": "IPv4", 00:18:06.134 "traddr": "10.0.0.1", 00:18:06.134 "trsvcid": "41614" 00:18:06.134 }, 00:18:06.134 "auth": { 00:18:06.134 "state": "completed", 00:18:06.134 "digest": "sha256", 00:18:06.134 "dhgroup": "ffdhe2048" 00:18:06.134 } 00:18:06.134 } 00:18:06.134 ]' 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.134 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.135 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.135 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.135 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.135 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.135 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.393 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.961 11:44:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.961 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.219 00:18:07.219 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.219 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.219 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.478 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.478 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.479 { 00:18:07.479 "cntlid": 17, 00:18:07.479 "qid": 0, 00:18:07.479 "state": "enabled", 00:18:07.479 "thread": "nvmf_tgt_poll_group_000", 00:18:07.479 "listen_address": { 00:18:07.479 "trtype": "TCP", 00:18:07.479 "adrfam": "IPv4", 00:18:07.479 "traddr": 
"10.0.0.2", 00:18:07.479 "trsvcid": "4420" 00:18:07.479 }, 00:18:07.479 "peer_address": { 00:18:07.479 "trtype": "TCP", 00:18:07.479 "adrfam": "IPv4", 00:18:07.479 "traddr": "10.0.0.1", 00:18:07.479 "trsvcid": "41624" 00:18:07.479 }, 00:18:07.479 "auth": { 00:18:07.479 "state": "completed", 00:18:07.479 "digest": "sha256", 00:18:07.479 "dhgroup": "ffdhe3072" 00:18:07.479 } 00:18:07.479 } 00:18:07.479 ]' 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:07.479 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.778 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.778 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.778 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.778 11:44:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.357 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.617 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.876 00:18:08.876 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.876 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.876 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.135 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.135 11:44:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.135 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.135 11:44:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.135 { 00:18:09.135 "cntlid": 19, 00:18:09.135 "qid": 0, 00:18:09.135 "state": "enabled", 00:18:09.135 "thread": "nvmf_tgt_poll_group_000", 00:18:09.135 "listen_address": { 00:18:09.135 "trtype": "TCP", 00:18:09.135 "adrfam": "IPv4", 00:18:09.135 "traddr": "10.0.0.2", 00:18:09.135 "trsvcid": "4420" 00:18:09.135 }, 00:18:09.135 "peer_address": { 00:18:09.135 "trtype": "TCP", 00:18:09.135 "adrfam": "IPv4", 00:18:09.135 "traddr": "10.0.0.1", 00:18:09.135 "trsvcid": "41662" 00:18:09.135 }, 00:18:09.135 "auth": { 00:18:09.135 "state": "completed", 00:18:09.135 "digest": "sha256", 00:18:09.135 "dhgroup": "ffdhe3072" 00:18:09.135 } 00:18:09.135 } 00:18:09.135 ]' 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.135 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.395 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.965 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.965 11:44:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.965 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.224 00:18:10.224 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.224 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.224 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.484 { 00:18:10.484 "cntlid": 21, 00:18:10.484 "qid": 0, 00:18:10.484 "state": "enabled", 00:18:10.484 "thread": "nvmf_tgt_poll_group_000", 00:18:10.484 "listen_address": { 00:18:10.484 "trtype": "TCP", 00:18:10.484 "adrfam": "IPv4", 00:18:10.484 "traddr": "10.0.0.2", 00:18:10.484 "trsvcid": "4420" 00:18:10.484 }, 00:18:10.484 "peer_address": { 00:18:10.484 "trtype": "TCP", 00:18:10.484 "adrfam": "IPv4", 00:18:10.484 "traddr": "10.0.0.1", 00:18:10.484 "trsvcid": "41688" 00:18:10.484 }, 00:18:10.484 "auth": { 00:18:10.484 "state": "completed", 00:18:10.484 "digest": "sha256", 00:18:10.484 "dhgroup": "ffdhe3072" 00:18:10.484 } 00:18:10.484 } 00:18:10.484 ]' 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.484 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.742 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.742 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.742 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.742 11:44:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.308 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
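Note: the nvme disconnect above, together with the nvmf_subsystem_remove_host that follows, closes one pass of the auth.sh matrix (digest sha256, dhgroup ffdhe3072, key index 2). A condensed sketch of what a single pass does, using the same socket paths and NQNs that appear in this log; rpc.py without -s is assumed to reach the target app on its default socket, and the named keys (key2/ckey2) are assumed to have been registered earlier in the test, outside this excerpt:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

  # Target side: allow this host on the subsystem, binding host and controller keys.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side (separate SPDK app behind /var/tmp/host.sock): restrict the negotiable
  # digests/dhgroups, then attach; the attach only succeeds if the DH-HMAC-CHAP
  # exchange completes with the matching keys.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Teardown so the next (digest, dhgroup, key) combination starts clean.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"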
00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.308 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.566 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.824 00:18:11.824 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.824 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.824 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.082 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.082 { 00:18:12.082 "cntlid": 23, 00:18:12.082 "qid": 0, 00:18:12.082 "state": "enabled", 00:18:12.082 "thread": "nvmf_tgt_poll_group_000", 00:18:12.082 "listen_address": { 00:18:12.083 "trtype": "TCP", 00:18:12.083 "adrfam": "IPv4", 00:18:12.083 "traddr": "10.0.0.2", 00:18:12.083 "trsvcid": "4420" 00:18:12.083 }, 00:18:12.083 "peer_address": { 00:18:12.083 "trtype": "TCP", 00:18:12.083 "adrfam": "IPv4", 00:18:12.083 "traddr": "10.0.0.1", 00:18:12.083 "trsvcid": "41704" 00:18:12.083 }, 00:18:12.083 "auth": { 00:18:12.083 "state": "completed", 00:18:12.083 "digest": "sha256", 00:18:12.083 "dhgroup": "ffdhe3072" 00:18:12.083 } 00:18:12.083 } 00:18:12.083 ]' 00:18:12.083 11:44:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.083 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.340 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.903 11:44:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.161 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.419 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.419 { 00:18:13.419 "cntlid": 25, 00:18:13.419 "qid": 0, 00:18:13.419 "state": "enabled", 00:18:13.419 "thread": "nvmf_tgt_poll_group_000", 00:18:13.419 "listen_address": { 00:18:13.419 "trtype": "TCP", 00:18:13.419 "adrfam": "IPv4", 00:18:13.419 "traddr": "10.0.0.2", 00:18:13.419 "trsvcid": "4420" 00:18:13.419 }, 00:18:13.419 "peer_address": { 00:18:13.419 "trtype": "TCP", 00:18:13.419 "adrfam": "IPv4", 00:18:13.419 "traddr": "10.0.0.1", 00:18:13.419 "trsvcid": "41742" 00:18:13.419 }, 00:18:13.419 "auth": { 00:18:13.419 "state": "completed", 00:18:13.419 "digest": "sha256", 00:18:13.419 "dhgroup": "ffdhe4096" 00:18:13.419 } 00:18:13.419 } 00:18:13.419 ]' 00:18:13.419 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.676 11:44:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.676 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.934 11:44:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.501 11:44:42 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.501 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.759 00:18:14.759 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.759 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.759 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:15.017 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.017 11:44:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.017 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.017 11:44:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.017 { 00:18:15.017 "cntlid": 27, 00:18:15.017 "qid": 0, 00:18:15.017 "state": "enabled", 00:18:15.017 "thread": "nvmf_tgt_poll_group_000", 00:18:15.017 "listen_address": { 00:18:15.017 "trtype": "TCP", 00:18:15.017 "adrfam": "IPv4", 00:18:15.017 "traddr": "10.0.0.2", 00:18:15.017 "trsvcid": "4420" 00:18:15.017 }, 00:18:15.017 "peer_address": { 00:18:15.017 "trtype": "TCP", 00:18:15.017 "adrfam": "IPv4", 00:18:15.017 "traddr": "10.0.0.1", 00:18:15.017 "trsvcid": "52446" 00:18:15.017 }, 00:18:15.017 "auth": { 00:18:15.017 "state": "completed", 00:18:15.017 "digest": "sha256", 00:18:15.017 "dhgroup": "ffdhe4096" 00:18:15.017 } 00:18:15.017 } 00:18:15.017 ]' 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.017 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.275 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.276 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.276 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.276 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:15.842 11:44:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.101 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.359 00:18:16.359 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.359 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.359 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.618 { 00:18:16.618 "cntlid": 29, 00:18:16.618 "qid": 0, 00:18:16.618 "state": "enabled", 00:18:16.618 "thread": "nvmf_tgt_poll_group_000", 00:18:16.618 "listen_address": { 00:18:16.618 "trtype": "TCP", 00:18:16.618 "adrfam": "IPv4", 00:18:16.618 "traddr": "10.0.0.2", 00:18:16.618 "trsvcid": "4420" 00:18:16.618 }, 00:18:16.618 "peer_address": { 00:18:16.618 "trtype": "TCP", 00:18:16.618 "adrfam": "IPv4", 00:18:16.618 "traddr": "10.0.0.1", 00:18:16.618 "trsvcid": "52470" 00:18:16.618 }, 00:18:16.618 "auth": { 00:18:16.618 "state": "completed", 00:18:16.618 "digest": "sha256", 00:18:16.618 "dhgroup": "ffdhe4096" 00:18:16.618 } 00:18:16.618 } 00:18:16.618 ]' 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.618 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.877 11:44:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:17.442 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
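Note: the digest/dhgroup/state assertions repeated above (target/auth.sh@46 through @48) parse the qpair dump with jq. Roughly, with $rpc the target-side rpc.py as in the earlier sketch:

  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # Authentication must have actually run and finished on the new queue pair:
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

The "completed" check is what distinguishes a real DH-HMAC-CHAP exchange from a connect that silently skipped authentication.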
00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.443 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:17.700 00:18:17.700 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.700 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.700 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.958 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:17.958 { 00:18:17.958 "cntlid": 31, 00:18:17.958 "qid": 0, 00:18:17.958 "state": "enabled", 00:18:17.958 "thread": "nvmf_tgt_poll_group_000", 00:18:17.958 "listen_address": { 00:18:17.958 "trtype": "TCP", 00:18:17.958 "adrfam": "IPv4", 00:18:17.959 "traddr": "10.0.0.2", 00:18:17.959 "trsvcid": 
"4420" 00:18:17.959 }, 00:18:17.959 "peer_address": { 00:18:17.959 "trtype": "TCP", 00:18:17.959 "adrfam": "IPv4", 00:18:17.959 "traddr": "10.0.0.1", 00:18:17.959 "trsvcid": "52490" 00:18:17.959 }, 00:18:17.959 "auth": { 00:18:17.959 "state": "completed", 00:18:17.959 "digest": "sha256", 00:18:17.959 "dhgroup": "ffdhe4096" 00:18:17.959 } 00:18:17.959 } 00:18:17.959 ]' 00:18:17.959 11:44:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:17.959 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.959 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.217 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:18.783 11:44:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.042 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.300 00:18:19.300 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.300 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.300 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.558 { 00:18:19.558 "cntlid": 33, 00:18:19.558 "qid": 0, 00:18:19.558 "state": "enabled", 00:18:19.558 "thread": "nvmf_tgt_poll_group_000", 00:18:19.558 "listen_address": { 00:18:19.558 "trtype": "TCP", 00:18:19.558 "adrfam": "IPv4", 00:18:19.558 "traddr": "10.0.0.2", 00:18:19.558 "trsvcid": "4420" 00:18:19.558 }, 00:18:19.558 "peer_address": { 00:18:19.558 "trtype": "TCP", 00:18:19.558 "adrfam": "IPv4", 00:18:19.558 "traddr": "10.0.0.1", 00:18:19.558 "trsvcid": "52520" 00:18:19.558 }, 00:18:19.558 "auth": { 00:18:19.558 "state": "completed", 00:18:19.558 "digest": "sha256", 00:18:19.558 "dhgroup": "ffdhe6144" 00:18:19.558 } 00:18:19.558 } 00:18:19.558 ]' 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.558 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.817 11:44:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.384 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.642 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.900 00:18:20.900 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.900 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.900 11:44:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.158 { 00:18:21.158 "cntlid": 35, 00:18:21.158 "qid": 0, 00:18:21.158 "state": "enabled", 00:18:21.158 "thread": "nvmf_tgt_poll_group_000", 00:18:21.158 "listen_address": { 00:18:21.158 "trtype": "TCP", 00:18:21.158 "adrfam": "IPv4", 00:18:21.158 "traddr": "10.0.0.2", 00:18:21.158 "trsvcid": "4420" 00:18:21.158 }, 00:18:21.158 "peer_address": { 00:18:21.158 "trtype": "TCP", 00:18:21.158 "adrfam": "IPv4", 00:18:21.158 "traddr": "10.0.0.1", 00:18:21.158 "trsvcid": "52538" 00:18:21.158 }, 00:18:21.158 "auth": { 00:18:21.158 "state": "completed", 00:18:21.158 "digest": "sha256", 00:18:21.158 "dhgroup": "ffdhe6144" 00:18:21.158 } 00:18:21.158 } 00:18:21.158 ]' 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.158 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.159 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.159 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.159 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.159 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.159 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.417 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
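Note: after each SPDK-host pass, the same credentials are re-verified from the kernel initiator with nvme-cli (target/auth.sh@52 through @55), as in the connect/disconnect pair above. A sketch with placeholder secrets; real DHHC-1 strings carry base64-encoded key material plus a CRC, and the middle field (00 through 03) records whether and how the secret was hashed (00 = cleartext, 01/02/03 = SHA-256/384/512):

  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e \
      --hostid 006f0d1b-21c0-e711-906e-00163566263e \
      --dhchap-secret 'DHHC-1:01:<host key, base64>:' \
      --dhchap-ctrl-secret 'DHHC-1:02:<controller key, base64>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0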
00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:21.984 11:44:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.242 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.501 00:18:22.501 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.501 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.501 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.759 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.759 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.759 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
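Each verification step captures the array printed by nvmf_subsystem_get_qpairs (shown immediately below for cntlid 37) and asserts three fields of the first qpair with jq. A minimal sketch of those checks, assuming qpairs.json holds the captured array:

  # the trace pipes the qpairs JSON through these filters and compares the
  # output against the digest/dhgroup selected for the current iteration
  jq -r '.[0].auth.digest'  qpairs.json   # expected in this pass: sha256
  jq -r '.[0].auth.dhgroup' qpairs.json   # expected in this pass: ffdhe6144
  jq -r '.[0].auth.state'   qpairs.json   # expected: completed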
00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.760 { 00:18:22.760 "cntlid": 37, 00:18:22.760 "qid": 0, 00:18:22.760 "state": "enabled", 00:18:22.760 "thread": "nvmf_tgt_poll_group_000", 00:18:22.760 "listen_address": { 00:18:22.760 "trtype": "TCP", 00:18:22.760 "adrfam": "IPv4", 00:18:22.760 "traddr": "10.0.0.2", 00:18:22.760 "trsvcid": "4420" 00:18:22.760 }, 00:18:22.760 "peer_address": { 00:18:22.760 "trtype": "TCP", 00:18:22.760 "adrfam": "IPv4", 00:18:22.760 "traddr": "10.0.0.1", 00:18:22.760 "trsvcid": "52574" 00:18:22.760 }, 00:18:22.760 "auth": { 00:18:22.760 "state": "completed", 00:18:22.760 "digest": "sha256", 00:18:22.760 "dhgroup": "ffdhe6144" 00:18:22.760 } 00:18:22.760 } 00:18:22.760 ]' 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.760 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.018 11:44:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.586 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.868 11:44:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.127 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.127 { 00:18:24.127 "cntlid": 39, 00:18:24.127 "qid": 0, 00:18:24.127 "state": "enabled", 00:18:24.127 "thread": "nvmf_tgt_poll_group_000", 00:18:24.127 "listen_address": { 00:18:24.127 "trtype": "TCP", 00:18:24.127 "adrfam": "IPv4", 00:18:24.127 "traddr": "10.0.0.2", 00:18:24.127 "trsvcid": "4420" 00:18:24.127 }, 00:18:24.127 "peer_address": { 00:18:24.127 "trtype": "TCP", 00:18:24.127 "adrfam": "IPv4", 00:18:24.127 "traddr": "10.0.0.1", 00:18:24.127 "trsvcid": "50058" 00:18:24.127 }, 00:18:24.127 "auth": { 00:18:24.127 "state": "completed", 00:18:24.127 "digest": "sha256", 00:18:24.127 "dhgroup": "ffdhe6144" 00:18:24.127 } 00:18:24.127 } 00:18:24.127 ]' 00:18:24.127 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.388 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.696 11:44:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:24.956 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.216 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:25.785 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.785 { 00:18:25.785 "cntlid": 41, 00:18:25.785 "qid": 0, 00:18:25.785 "state": "enabled", 00:18:25.785 "thread": "nvmf_tgt_poll_group_000", 00:18:25.785 "listen_address": { 00:18:25.785 "trtype": "TCP", 00:18:25.785 "adrfam": "IPv4", 00:18:25.785 "traddr": "10.0.0.2", 00:18:25.785 "trsvcid": "4420" 00:18:25.785 }, 00:18:25.785 "peer_address": { 00:18:25.785 "trtype": "TCP", 00:18:25.785 "adrfam": "IPv4", 00:18:25.785 "traddr": "10.0.0.1", 00:18:25.785 "trsvcid": "50090" 00:18:25.785 }, 00:18:25.785 "auth": { 00:18:25.785 "state": "completed", 00:18:25.785 "digest": "sha256", 00:18:25.785 "dhgroup": "ffdhe8192" 00:18:25.785 } 00:18:25.785 } 00:18:25.785 ]' 00:18:25.785 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.044 11:44:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.304 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.872 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.872 11:44:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.440 00:18:27.440 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.440 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.440 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.699 { 00:18:27.699 "cntlid": 43, 00:18:27.699 "qid": 0, 00:18:27.699 "state": "enabled", 00:18:27.699 "thread": "nvmf_tgt_poll_group_000", 00:18:27.699 "listen_address": { 00:18:27.699 "trtype": "TCP", 00:18:27.699 "adrfam": "IPv4", 00:18:27.699 "traddr": "10.0.0.2", 00:18:27.699 "trsvcid": "4420" 00:18:27.699 }, 00:18:27.699 "peer_address": { 00:18:27.699 "trtype": "TCP", 00:18:27.699 "adrfam": "IPv4", 00:18:27.699 "traddr": "10.0.0.1", 00:18:27.699 "trsvcid": "50126" 00:18:27.699 }, 00:18:27.699 "auth": { 00:18:27.699 "state": "completed", 00:18:27.699 "digest": "sha256", 00:18:27.699 "dhgroup": "ffdhe8192" 00:18:27.699 } 00:18:27.699 } 00:18:27.699 ]' 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.699 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.958 11:44:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.527 11:44:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.096 00:18:29.096 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.096 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.096 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.355 { 00:18:29.355 "cntlid": 45, 00:18:29.355 "qid": 0, 00:18:29.355 "state": "enabled", 00:18:29.355 "thread": "nvmf_tgt_poll_group_000", 00:18:29.355 "listen_address": { 00:18:29.355 "trtype": "TCP", 00:18:29.355 "adrfam": "IPv4", 00:18:29.355 "traddr": "10.0.0.2", 00:18:29.355 
"trsvcid": "4420" 00:18:29.355 }, 00:18:29.355 "peer_address": { 00:18:29.355 "trtype": "TCP", 00:18:29.355 "adrfam": "IPv4", 00:18:29.355 "traddr": "10.0.0.1", 00:18:29.355 "trsvcid": "50152" 00:18:29.355 }, 00:18:29.355 "auth": { 00:18:29.355 "state": "completed", 00:18:29.355 "digest": "sha256", 00:18:29.355 "dhgroup": "ffdhe8192" 00:18:29.355 } 00:18:29.355 } 00:18:29.355 ]' 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.355 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.356 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.614 11:44:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.183 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.750 00:18:30.750 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.750 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.750 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.009 { 00:18:31.009 "cntlid": 47, 00:18:31.009 "qid": 0, 00:18:31.009 "state": "enabled", 00:18:31.009 "thread": "nvmf_tgt_poll_group_000", 00:18:31.009 "listen_address": { 00:18:31.009 "trtype": "TCP", 00:18:31.009 "adrfam": "IPv4", 00:18:31.009 "traddr": "10.0.0.2", 00:18:31.009 "trsvcid": "4420" 00:18:31.009 }, 00:18:31.009 "peer_address": { 00:18:31.009 "trtype": "TCP", 00:18:31.009 "adrfam": "IPv4", 00:18:31.009 "traddr": "10.0.0.1", 00:18:31.009 "trsvcid": "50162" 00:18:31.009 }, 00:18:31.009 "auth": { 00:18:31.009 "state": "completed", 00:18:31.009 "digest": "sha256", 00:18:31.009 "dhgroup": "ffdhe8192" 00:18:31.009 } 00:18:31.009 } 00:18:31.009 ]' 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.009 11:44:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.009 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.009 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.009 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.009 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:18:31.009 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.266 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.832 11:44:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.090 00:18:32.090 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.090 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.090 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.348 { 00:18:32.348 "cntlid": 49, 00:18:32.348 "qid": 0, 00:18:32.348 "state": "enabled", 00:18:32.348 "thread": "nvmf_tgt_poll_group_000", 00:18:32.348 "listen_address": { 00:18:32.348 "trtype": "TCP", 00:18:32.348 "adrfam": "IPv4", 00:18:32.348 "traddr": "10.0.0.2", 00:18:32.348 "trsvcid": "4420" 00:18:32.348 }, 00:18:32.348 "peer_address": { 00:18:32.348 "trtype": "TCP", 00:18:32.348 "adrfam": "IPv4", 00:18:32.348 "traddr": "10.0.0.1", 00:18:32.348 "trsvcid": "50194" 00:18:32.348 }, 00:18:32.348 "auth": { 00:18:32.348 "state": "completed", 00:18:32.348 "digest": "sha384", 00:18:32.348 "dhgroup": "null" 00:18:32.348 } 00:18:32.348 } 00:18:32.348 ]' 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:32.348 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.606 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.606 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.606 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.606 11:45:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:33.173 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.174 11:45:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.174 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.433 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:33.692 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.692 11:45:01 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.952 { 00:18:33.952 "cntlid": 51, 00:18:33.952 "qid": 0, 00:18:33.952 "state": "enabled", 00:18:33.952 "thread": "nvmf_tgt_poll_group_000", 00:18:33.952 "listen_address": { 00:18:33.952 "trtype": "TCP", 00:18:33.952 "adrfam": "IPv4", 00:18:33.952 "traddr": "10.0.0.2", 00:18:33.952 "trsvcid": "4420" 00:18:33.952 }, 00:18:33.952 "peer_address": { 00:18:33.952 "trtype": "TCP", 00:18:33.952 "adrfam": "IPv4", 00:18:33.952 "traddr": "10.0.0.1", 00:18:33.952 "trsvcid": "50220" 00:18:33.952 }, 00:18:33.952 "auth": { 00:18:33.952 "state": "completed", 00:18:33.952 "digest": "sha384", 00:18:33.952 "dhgroup": "null" 00:18:33.952 } 00:18:33.952 } 00:18:33.952 ]' 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.952 11:45:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.211 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:34.779 
11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.779 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.780 11:45:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.040 00:18:35.040 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.040 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.040 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:35.300 { 00:18:35.300 "cntlid": 53, 00:18:35.300 "qid": 0, 00:18:35.300 "state": "enabled", 00:18:35.300 "thread": "nvmf_tgt_poll_group_000", 00:18:35.300 "listen_address": { 00:18:35.300 "trtype": "TCP", 00:18:35.300 "adrfam": "IPv4", 00:18:35.300 "traddr": "10.0.0.2", 00:18:35.300 "trsvcid": "4420" 00:18:35.300 }, 00:18:35.300 "peer_address": { 00:18:35.300 "trtype": "TCP", 00:18:35.300 "adrfam": "IPv4", 00:18:35.300 "traddr": "10.0.0.1", 00:18:35.300 "trsvcid": "54092" 00:18:35.300 }, 00:18:35.300 "auth": { 00:18:35.300 "state": "completed", 00:18:35.300 "digest": "sha384", 00:18:35.300 "dhgroup": "null" 00:18:35.300 } 00:18:35.300 } 00:18:35.300 ]' 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.300 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.558 11:45:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.126 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.386 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.386 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.645 { 00:18:36.645 "cntlid": 55, 00:18:36.645 "qid": 0, 00:18:36.645 "state": "enabled", 00:18:36.645 "thread": "nvmf_tgt_poll_group_000", 00:18:36.645 "listen_address": { 00:18:36.645 "trtype": "TCP", 00:18:36.645 "adrfam": "IPv4", 00:18:36.645 "traddr": "10.0.0.2", 00:18:36.645 "trsvcid": "4420" 00:18:36.645 }, 00:18:36.645 "peer_address": { 00:18:36.645 "trtype": "TCP", 00:18:36.645 "adrfam": "IPv4", 00:18:36.645 "traddr": "10.0.0.1", 00:18:36.645 "trsvcid": "54104" 00:18:36.645 }, 00:18:36.645 "auth": { 00:18:36.645 "state": "completed", 00:18:36.645 "digest": "sha384", 00:18:36.645 "dhgroup": "null" 00:18:36.645 } 00:18:36.645 } 00:18:36.645 ]' 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.645 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.904 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.904 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.904 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.904 11:45:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:37.472 11:45:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.472 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.732 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.991 00:18:37.991 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.991 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.991 11:45:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- 
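
Stripped of the xtrace echoes, each connect_authenticate iteration above is three RPC calls: constrain the host to one digest/dhgroup combination, register the host and its keys on the subsystem, then attach, which is where the DH-HMAC-CHAP exchange actually runs. A condensed sketch using the same RPCs, sockets, and NQNs as this run (key0/ckey0 are keyring names registered earlier in the test, outside this excerpt):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # Host side: allow exactly one digest/dhgroup combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
  # Target side: admit the host on the subsystem with its DHCHAP key pair.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach; authentication happens during this connect.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
      -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
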
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:38.250 { 00:18:38.250 "cntlid": 57, 00:18:38.250 "qid": 0, 00:18:38.250 "state": "enabled", 00:18:38.250 "thread": "nvmf_tgt_poll_group_000", 00:18:38.250 "listen_address": { 00:18:38.250 "trtype": "TCP", 00:18:38.250 "adrfam": "IPv4", 00:18:38.250 "traddr": "10.0.0.2", 00:18:38.250 "trsvcid": "4420" 00:18:38.250 }, 00:18:38.250 "peer_address": { 00:18:38.250 "trtype": "TCP", 00:18:38.250 "adrfam": "IPv4", 00:18:38.250 "traddr": "10.0.0.1", 00:18:38.250 "trsvcid": "54122" 00:18:38.250 }, 00:18:38.250 "auth": { 00:18:38.250 "state": "completed", 00:18:38.250 "digest": "sha384", 00:18:38.250 "dhgroup": "ffdhe2048" 00:18:38.250 } 00:18:38.250 } 00:18:38.250 ]' 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.250 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.509 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.079 11:45:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.079 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.338 00:18:39.338 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.338 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.338 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.598 { 00:18:39.598 "cntlid": 59, 00:18:39.598 "qid": 0, 00:18:39.598 "state": "enabled", 00:18:39.598 "thread": "nvmf_tgt_poll_group_000", 00:18:39.598 "listen_address": { 00:18:39.598 "trtype": "TCP", 00:18:39.598 "adrfam": "IPv4", 00:18:39.598 "traddr": "10.0.0.2", 00:18:39.598 "trsvcid": "4420" 00:18:39.598 }, 00:18:39.598 "peer_address": { 00:18:39.598 "trtype": "TCP", 00:18:39.598 "adrfam": "IPv4", 00:18:39.598 
"traddr": "10.0.0.1", 00:18:39.598 "trsvcid": "54138" 00:18:39.598 }, 00:18:39.598 "auth": { 00:18:39.598 "state": "completed", 00:18:39.598 "digest": "sha384", 00:18:39.598 "dhgroup": "ffdhe2048" 00:18:39.598 } 00:18:39.598 } 00:18:39.598 ]' 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.598 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.857 11:45:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.425 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.685 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.944 00:18:40.944 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.944 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.944 11:45:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.944 { 00:18:40.944 "cntlid": 61, 00:18:40.944 "qid": 0, 00:18:40.944 "state": "enabled", 00:18:40.944 "thread": "nvmf_tgt_poll_group_000", 00:18:40.944 "listen_address": { 00:18:40.944 "trtype": "TCP", 00:18:40.944 "adrfam": "IPv4", 00:18:40.944 "traddr": "10.0.0.2", 00:18:40.944 "trsvcid": "4420" 00:18:40.944 }, 00:18:40.944 "peer_address": { 00:18:40.944 "trtype": "TCP", 00:18:40.944 "adrfam": "IPv4", 00:18:40.944 "traddr": "10.0.0.1", 00:18:40.944 "trsvcid": "54156" 00:18:40.944 }, 00:18:40.944 "auth": { 00:18:40.944 "state": "completed", 00:18:40.944 "digest": "sha384", 00:18:40.944 "dhgroup": "ffdhe2048" 00:18:40.944 } 00:18:40.944 } 00:18:40.944 ]' 00:18:40.944 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.203 11:45:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.484 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:41.767 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:41.768 11:45:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.026 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.284 00:18:42.284 11:45:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.284 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.284 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.543 { 00:18:42.543 "cntlid": 63, 00:18:42.543 "qid": 0, 00:18:42.543 "state": "enabled", 00:18:42.543 "thread": "nvmf_tgt_poll_group_000", 00:18:42.543 "listen_address": { 00:18:42.543 "trtype": "TCP", 00:18:42.543 "adrfam": "IPv4", 00:18:42.543 "traddr": "10.0.0.2", 00:18:42.543 "trsvcid": "4420" 00:18:42.543 }, 00:18:42.543 "peer_address": { 00:18:42.543 "trtype": "TCP", 00:18:42.543 "adrfam": "IPv4", 00:18:42.543 "traddr": "10.0.0.1", 00:18:42.543 "trsvcid": "54184" 00:18:42.543 }, 00:18:42.543 "auth": { 00:18:42.543 "state": "completed", 00:18:42.543 "digest": "sha384", 00:18:42.543 "dhgroup": "ffdhe2048" 00:18:42.543 } 00:18:42.543 } 00:18:42.543 ]' 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.543 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.802 11:45:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.369 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.369 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:43.628 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.628 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.887 { 
00:18:43.887 "cntlid": 65, 00:18:43.887 "qid": 0, 00:18:43.887 "state": "enabled", 00:18:43.887 "thread": "nvmf_tgt_poll_group_000", 00:18:43.887 "listen_address": { 00:18:43.887 "trtype": "TCP", 00:18:43.887 "adrfam": "IPv4", 00:18:43.887 "traddr": "10.0.0.2", 00:18:43.887 "trsvcid": "4420" 00:18:43.887 }, 00:18:43.887 "peer_address": { 00:18:43.887 "trtype": "TCP", 00:18:43.887 "adrfam": "IPv4", 00:18:43.887 "traddr": "10.0.0.1", 00:18:43.887 "trsvcid": "54216" 00:18:43.887 }, 00:18:43.887 "auth": { 00:18:43.887 "state": "completed", 00:18:43.887 "digest": "sha384", 00:18:43.887 "dhgroup": "ffdhe3072" 00:18:43.887 } 00:18:43.887 } 00:18:43.887 ]' 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.887 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.146 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.146 11:45:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.146 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.146 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.146 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.146 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.713 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.970 11:45:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.235 00:18:45.235 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.235 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.235 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.493 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.493 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.494 { 00:18:45.494 "cntlid": 67, 00:18:45.494 "qid": 0, 00:18:45.494 "state": "enabled", 00:18:45.494 "thread": "nvmf_tgt_poll_group_000", 00:18:45.494 "listen_address": { 00:18:45.494 "trtype": "TCP", 00:18:45.494 "adrfam": "IPv4", 00:18:45.494 "traddr": "10.0.0.2", 00:18:45.494 "trsvcid": "4420" 00:18:45.494 }, 00:18:45.494 "peer_address": { 00:18:45.494 "trtype": "TCP", 00:18:45.494 "adrfam": "IPv4", 00:18:45.494 "traddr": "10.0.0.1", 00:18:45.494 "trsvcid": "54242" 00:18:45.494 }, 00:18:45.494 "auth": { 00:18:45.494 "state": "completed", 00:18:45.494 "digest": "sha384", 00:18:45.494 "dhgroup": "ffdhe3072" 00:18:45.494 } 00:18:45.494 } 00:18:45.494 ]' 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.494 11:45:13 
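
The jq checks around this point are the verification half of connect_authenticate: read the qpair's negotiated auth block back from the target and compare each field against the loop's expected values. Condensed, assuming the same rpc.py path as in the sketch above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # All three fields must match what bdev_nvme_set_options constrained.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
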
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.494 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.753 11:45:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.320 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:46.579 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.838 { 00:18:46.838 "cntlid": 69, 00:18:46.838 "qid": 0, 00:18:46.838 "state": "enabled", 00:18:46.838 "thread": "nvmf_tgt_poll_group_000", 00:18:46.838 "listen_address": { 00:18:46.838 "trtype": "TCP", 00:18:46.838 "adrfam": "IPv4", 00:18:46.838 "traddr": "10.0.0.2", 00:18:46.838 "trsvcid": "4420" 00:18:46.838 }, 00:18:46.838 "peer_address": { 00:18:46.838 "trtype": "TCP", 00:18:46.838 "adrfam": "IPv4", 00:18:46.838 "traddr": "10.0.0.1", 00:18:46.838 "trsvcid": "54256" 00:18:46.838 }, 00:18:46.838 "auth": { 00:18:46.838 "state": "completed", 00:18:46.838 "digest": "sha384", 00:18:46.838 "dhgroup": "ffdhe3072" 00:18:46.838 } 00:18:46.838 } 00:18:46.838 ]' 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.838 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.097 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.097 11:45:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.097 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.097 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.097 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.097 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret 
DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.665 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:47.924 11:45:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.183 00:18:48.183 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.183 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.183 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- 
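
Note the asymmetry in the key3 iterations above: nvmf_subsystem_add_host and bdev_nvme_attach_controller carry only --dhchap-key key3, with no --dhchap-ctrlr-key. That is the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion at work: ${var:+word} yields word only when var is set and non-empty, and this test leaves ckeys[3] empty, so key3 exercises unidirectional authentication (the host proves itself to the controller, but not the reverse). A self-contained illustration of the expansion (illustrative array values, not the test's real keys):

  # ${var:+word} drops the controller-key arguments when no ckey exists.
  ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")
  for keyid in "${!ckeys[@]}"; do
      args=(--dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${args[*]}"
  done
  # key3 -> --dhchap-key key3   (no controller key: unidirectional auth)
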
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.453 { 00:18:48.453 "cntlid": 71, 00:18:48.453 "qid": 0, 00:18:48.453 "state": "enabled", 00:18:48.453 "thread": "nvmf_tgt_poll_group_000", 00:18:48.453 "listen_address": { 00:18:48.453 "trtype": "TCP", 00:18:48.453 "adrfam": "IPv4", 00:18:48.453 "traddr": "10.0.0.2", 00:18:48.453 "trsvcid": "4420" 00:18:48.453 }, 00:18:48.453 "peer_address": { 00:18:48.453 "trtype": "TCP", 00:18:48.453 "adrfam": "IPv4", 00:18:48.453 "traddr": "10.0.0.1", 00:18:48.453 "trsvcid": "54270" 00:18:48.453 }, 00:18:48.453 "auth": { 00:18:48.453 "state": "completed", 00:18:48.453 "digest": "sha384", 00:18:48.453 "dhgroup": "ffdhe3072" 00:18:48.453 } 00:18:48.453 } 00:18:48.453 ]' 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.453 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.713 11:45:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- 
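
With ffdhe3072 finished, the trace enters the ffdhe4096 pass; the for dhgroup in "${dhgroups[@]}" / for keyid in "${!keys[@]}" echoes mark the nesting, and connect_authenticate sha384 <dhgroup> <keyid> is the body. The driving loop therefore has this shape (a sketch; the dhgroups list is inferred from the combinations visible in this excerpt, and the real script may cover more groups and digests):

  # Inferred shape of the sha384 pass: every dhgroup x every key id.
  dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096)
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done
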
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:18:49.281 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.282 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:49.540 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.798 { 00:18:49.798 "cntlid": 73, 00:18:49.798 "qid": 0, 00:18:49.798 "state": "enabled", 00:18:49.798 "thread": "nvmf_tgt_poll_group_000", 00:18:49.798 "listen_address": { 00:18:49.798 "trtype": "TCP", 00:18:49.798 "adrfam": "IPv4", 00:18:49.798 "traddr": "10.0.0.2", 00:18:49.798 "trsvcid": "4420" 00:18:49.798 }, 00:18:49.798 "peer_address": { 00:18:49.798 "trtype": "TCP", 00:18:49.798 "adrfam": "IPv4", 00:18:49.798 "traddr": "10.0.0.1", 00:18:49.798 "trsvcid": "54294" 00:18:49.798 }, 00:18:49.798 "auth": { 00:18:49.798 
"state": "completed", 00:18:49.798 "digest": "sha384", 00:18:49.798 "dhgroup": "ffdhe4096" 00:18:49.798 } 00:18:49.798 } 00:18:49.798 ]' 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.798 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.057 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.057 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.057 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.057 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.057 11:45:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.057 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:50.624 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.625 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:50.883 11:45:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:51.142 00:18:51.142 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.142 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.142 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.401 { 00:18:51.401 "cntlid": 75, 00:18:51.401 "qid": 0, 00:18:51.401 "state": "enabled", 00:18:51.401 "thread": "nvmf_tgt_poll_group_000", 00:18:51.401 "listen_address": { 00:18:51.401 "trtype": "TCP", 00:18:51.401 "adrfam": "IPv4", 00:18:51.401 "traddr": "10.0.0.2", 00:18:51.401 "trsvcid": "4420" 00:18:51.401 }, 00:18:51.401 "peer_address": { 00:18:51.401 "trtype": "TCP", 00:18:51.401 "adrfam": "IPv4", 00:18:51.401 "traddr": "10.0.0.1", 00:18:51.401 "trsvcid": "54306" 00:18:51.401 }, 00:18:51.401 "auth": { 00:18:51.401 "state": "completed", 00:18:51.401 "digest": "sha384", 00:18:51.401 "dhgroup": "ffdhe4096" 00:18:51.401 } 00:18:51.401 } 00:18:51.401 ]' 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.401 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.659 11:45:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.228 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.228 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.487 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:18:52.746 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.746 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.005 { 00:18:53.005 "cntlid": 77, 00:18:53.005 "qid": 0, 00:18:53.005 "state": "enabled", 00:18:53.005 "thread": "nvmf_tgt_poll_group_000", 00:18:53.005 "listen_address": { 00:18:53.005 "trtype": "TCP", 00:18:53.005 "adrfam": "IPv4", 00:18:53.005 "traddr": "10.0.0.2", 00:18:53.005 "trsvcid": "4420" 00:18:53.005 }, 00:18:53.005 "peer_address": { 00:18:53.005 "trtype": "TCP", 00:18:53.005 "adrfam": "IPv4", 00:18:53.005 "traddr": "10.0.0.1", 00:18:53.005 "trsvcid": "54332" 00:18:53.005 }, 00:18:53.005 "auth": { 00:18:53.005 "state": "completed", 00:18:53.005 "digest": "sha384", 00:18:53.005 "dhgroup": "ffdhe4096" 00:18:53.005 } 00:18:53.005 } 00:18:53.005 ]' 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.005 11:45:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.263 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.829 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:18:53.830 11:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.830 11:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.830 11:45:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.830 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:53.830 11:45:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:54.088 00:18:54.088 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.088 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.088 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.345 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.345 { 00:18:54.345 "cntlid": 79, 00:18:54.345 "qid": 
0, 00:18:54.345 "state": "enabled", 00:18:54.346 "thread": "nvmf_tgt_poll_group_000", 00:18:54.346 "listen_address": { 00:18:54.346 "trtype": "TCP", 00:18:54.346 "adrfam": "IPv4", 00:18:54.346 "traddr": "10.0.0.2", 00:18:54.346 "trsvcid": "4420" 00:18:54.346 }, 00:18:54.346 "peer_address": { 00:18:54.346 "trtype": "TCP", 00:18:54.346 "adrfam": "IPv4", 00:18:54.346 "traddr": "10.0.0.1", 00:18:54.346 "trsvcid": "56006" 00:18:54.346 }, 00:18:54.346 "auth": { 00:18:54.346 "state": "completed", 00:18:54.346 "digest": "sha384", 00:18:54.346 "dhgroup": "ffdhe4096" 00:18:54.346 } 00:18:54.346 } 00:18:54.346 ]' 00:18:54.346 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.346 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.346 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.346 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.346 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.603 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.603 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.603 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.603 11:45:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.170 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.429 11:45:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.429 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:55.687 00:18:55.687 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.687 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.687 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.946 { 00:18:55.946 "cntlid": 81, 00:18:55.946 "qid": 0, 00:18:55.946 "state": "enabled", 00:18:55.946 "thread": "nvmf_tgt_poll_group_000", 00:18:55.946 "listen_address": { 00:18:55.946 "trtype": "TCP", 00:18:55.946 "adrfam": "IPv4", 00:18:55.946 "traddr": "10.0.0.2", 00:18:55.946 "trsvcid": "4420" 00:18:55.946 }, 00:18:55.946 "peer_address": { 00:18:55.946 "trtype": "TCP", 00:18:55.946 "adrfam": "IPv4", 00:18:55.946 "traddr": "10.0.0.1", 00:18:55.946 "trsvcid": "56020" 00:18:55.946 }, 00:18:55.946 "auth": { 00:18:55.946 "state": "completed", 00:18:55.946 "digest": "sha384", 00:18:55.946 "dhgroup": "ffdhe6144" 00:18:55.946 } 00:18:55.946 } 00:18:55.946 ]' 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.946 11:45:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.205 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:56.773 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.033 11:45:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:57.292 00:18:57.292 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.292 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.292 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.551 { 00:18:57.551 "cntlid": 83, 00:18:57.551 "qid": 0, 00:18:57.551 "state": "enabled", 00:18:57.551 "thread": "nvmf_tgt_poll_group_000", 00:18:57.551 "listen_address": { 00:18:57.551 "trtype": "TCP", 00:18:57.551 "adrfam": "IPv4", 00:18:57.551 "traddr": "10.0.0.2", 00:18:57.551 "trsvcid": "4420" 00:18:57.551 }, 00:18:57.551 "peer_address": { 00:18:57.551 "trtype": "TCP", 00:18:57.551 "adrfam": "IPv4", 00:18:57.551 "traddr": "10.0.0.1", 00:18:57.551 "trsvcid": "56046" 00:18:57.551 }, 00:18:57.551 "auth": { 00:18:57.551 "state": "completed", 00:18:57.551 "digest": "sha384", 00:18:57.551 "dhgroup": "ffdhe6144" 00:18:57.551 } 00:18:57.551 } 00:18:57.551 ]' 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.551 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.812 11:45:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret 
DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.416 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.983 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.983 11:45:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.983 { 00:18:58.983 "cntlid": 85, 00:18:58.983 "qid": 0, 00:18:58.983 "state": "enabled", 00:18:58.983 "thread": "nvmf_tgt_poll_group_000", 00:18:58.983 "listen_address": { 00:18:58.983 "trtype": "TCP", 00:18:58.983 "adrfam": "IPv4", 00:18:58.983 "traddr": "10.0.0.2", 00:18:58.983 "trsvcid": "4420" 00:18:58.983 }, 00:18:58.983 "peer_address": { 00:18:58.983 "trtype": "TCP", 00:18:58.983 "adrfam": "IPv4", 00:18:58.983 "traddr": "10.0.0.1", 00:18:58.983 "trsvcid": "56078" 00:18:58.983 }, 00:18:58.983 "auth": { 00:18:58.983 "state": "completed", 00:18:58.983 "digest": "sha384", 00:18:58.983 "dhgroup": "ffdhe6144" 00:18:58.983 } 00:18:58.983 } 00:18:58.983 ]' 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:58.983 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.241 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.241 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.241 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.241 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
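The passes logged above repeat one cycle per key: pin the host to a single digest/dhgroup pair, authorize the host on the subsystem with that key, attach a controller using the same key, verify the qpair, then tear everything down before the next key. A minimal standalone sketch of that cycle, assuming key names key0..key3 and ckey0..ckey2 are already provisioned on both the target and host applications; the rpc.py path, sockets, address, and NQNs are copied from the log, while the loop itself is an illustration rather than the actual test script:

    #!/usr/bin/env bash
    # Sketch of the per-key connect/authenticate cycle seen in this log.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme app

    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

    for keyid in 0 1 2 3; do
        # key3 carries no controller (bidirectional) key in this run.
        ckey=()
        (( keyid != 3 )) && ckey=(--dhchap-ctrlr-key "ckey$keyid")

        # Host: restrict DH-HMAC-CHAP to the digest/dhgroup pair under test.
        hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups ffdhe6144

        # Target (default RPC socket): authorize the host with this key.
        "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
               --dhchap-key "key$keyid" "${ckey[@]}"

        # Host: attach a controller, authenticating with the same key(s).
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
                --dhchap-key "key$keyid" "${ckey[@]}"

        # (qpair verification goes here; see the jq sketch further down)

        hostrpc bdev_nvme_detach_controller nvme0
        "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
    done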
00:18:59.807 11:45:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.064 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.323 00:19:00.323 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.323 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.323 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.582 { 00:19:00.582 "cntlid": 87, 00:19:00.582 "qid": 0, 00:19:00.582 "state": "enabled", 00:19:00.582 "thread": "nvmf_tgt_poll_group_000", 00:19:00.582 "listen_address": { 00:19:00.582 "trtype": "TCP", 00:19:00.582 "adrfam": "IPv4", 00:19:00.582 "traddr": "10.0.0.2", 00:19:00.582 "trsvcid": "4420" 00:19:00.582 }, 00:19:00.582 "peer_address": { 00:19:00.582 "trtype": "TCP", 00:19:00.582 "adrfam": "IPv4", 00:19:00.582 "traddr": "10.0.0.1", 00:19:00.582 "trsvcid": "56124" 00:19:00.582 }, 00:19:00.582 "auth": { 00:19:00.582 "state": "completed", 
00:19:00.582 "digest": "sha384", 00:19:00.582 "dhgroup": "ffdhe6144" 00:19:00.582 } 00:19:00.582 } 00:19:00.582 ]' 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.582 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.841 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.841 11:45:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.408 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.667 11:45:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.925 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.183 11:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.184 11:45:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.184 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.184 { 00:19:02.184 "cntlid": 89, 00:19:02.184 "qid": 0, 00:19:02.184 "state": "enabled", 00:19:02.184 "thread": "nvmf_tgt_poll_group_000", 00:19:02.184 "listen_address": { 00:19:02.184 "trtype": "TCP", 00:19:02.184 "adrfam": "IPv4", 00:19:02.184 "traddr": "10.0.0.2", 00:19:02.184 "trsvcid": "4420" 00:19:02.184 }, 00:19:02.184 "peer_address": { 00:19:02.184 "trtype": "TCP", 00:19:02.184 "adrfam": "IPv4", 00:19:02.184 "traddr": "10.0.0.1", 00:19:02.184 "trsvcid": "56156" 00:19:02.184 }, 00:19:02.184 "auth": { 00:19:02.184 "state": "completed", 00:19:02.184 "digest": "sha384", 00:19:02.184 "dhgroup": "ffdhe8192" 00:19:02.184 } 00:19:02.184 } 00:19:02.184 ]' 00:19:02.184 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.184 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:02.184 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.442 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:02.442 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:02.442 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.442 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.442 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.701 11:45:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:03.267 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
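Each attach is then verified with the same three jq assertions visible throughout this log: the controller name on the host side, and the negotiated digest, dhgroup, and authentication state on the target side. Condensed, using the rpc/hostrpc helpers from the sketch above and the sha384/ffdhe8192 pair being exercised at this point:

    # Host: the controller must have come up under the expected name.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target: the qpair must have negotiated the configured parameters
    # and DH-HMAC-CHAP must have reached the "completed" state.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]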
00:19:03.834 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.834 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.092 11:45:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.092 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.092 { 00:19:04.092 "cntlid": 91, 00:19:04.092 "qid": 0, 00:19:04.092 "state": "enabled", 00:19:04.092 "thread": "nvmf_tgt_poll_group_000", 00:19:04.092 "listen_address": { 00:19:04.092 "trtype": "TCP", 00:19:04.092 "adrfam": "IPv4", 00:19:04.092 "traddr": "10.0.0.2", 00:19:04.092 "trsvcid": "4420" 00:19:04.092 }, 00:19:04.092 "peer_address": { 00:19:04.092 "trtype": "TCP", 00:19:04.092 "adrfam": "IPv4", 00:19:04.092 "traddr": "10.0.0.1", 00:19:04.092 "trsvcid": "56186" 00:19:04.092 }, 00:19:04.092 "auth": { 00:19:04.092 "state": "completed", 00:19:04.092 "digest": "sha384", 00:19:04.092 "dhgroup": "ffdhe8192" 00:19:04.092 } 00:19:04.092 } 00:19:04.092 ]' 00:19:04.092 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.092 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.092 11:45:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.092 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:04.092 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.092 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.092 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.092 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.350 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.918 11:45:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:05.484 00:19:05.484 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.484 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.484 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.743 { 
00:19:05.743 "cntlid": 93, 00:19:05.743 "qid": 0, 00:19:05.743 "state": "enabled", 00:19:05.743 "thread": "nvmf_tgt_poll_group_000", 00:19:05.743 "listen_address": { 00:19:05.743 "trtype": "TCP", 00:19:05.743 "adrfam": "IPv4", 00:19:05.743 "traddr": "10.0.0.2", 00:19:05.743 "trsvcid": "4420" 00:19:05.743 }, 00:19:05.743 "peer_address": { 00:19:05.743 "trtype": "TCP", 00:19:05.743 "adrfam": "IPv4", 00:19:05.743 "traddr": "10.0.0.1", 00:19:05.743 "trsvcid": "35134" 00:19:05.743 }, 00:19:05.743 "auth": { 00:19:05.743 "state": "completed", 00:19:05.743 "digest": "sha384", 00:19:05.743 "dhgroup": "ffdhe8192" 00:19:05.743 } 00:19:05.743 } 00:19:05.743 ]' 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.743 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.012 11:45:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:06.579 11:45:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.579 11:45:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:07.146 00:19:07.146 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.146 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.146 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.405 { 00:19:07.405 "cntlid": 95, 00:19:07.405 "qid": 0, 00:19:07.405 "state": "enabled", 00:19:07.405 "thread": "nvmf_tgt_poll_group_000", 00:19:07.405 "listen_address": { 00:19:07.405 "trtype": "TCP", 00:19:07.405 "adrfam": "IPv4", 00:19:07.405 "traddr": "10.0.0.2", 00:19:07.405 "trsvcid": "4420" 00:19:07.405 }, 00:19:07.405 "peer_address": { 00:19:07.405 "trtype": "TCP", 00:19:07.405 "adrfam": "IPv4", 00:19:07.405 "traddr": "10.0.0.1", 00:19:07.405 "trsvcid": "35160" 00:19:07.405 }, 00:19:07.405 "auth": { 00:19:07.405 "state": "completed", 00:19:07.405 "digest": "sha384", 00:19:07.405 "dhgroup": "ffdhe8192" 00:19:07.405 } 00:19:07.405 } 00:19:07.405 ]' 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.405 11:45:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.405 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.665 11:45:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.232 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.491 00:19:08.491 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.491 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.491 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.750 { 00:19:08.750 "cntlid": 97, 00:19:08.750 "qid": 0, 00:19:08.750 "state": "enabled", 00:19:08.750 "thread": "nvmf_tgt_poll_group_000", 00:19:08.750 "listen_address": { 00:19:08.750 "trtype": "TCP", 00:19:08.750 "adrfam": "IPv4", 00:19:08.750 "traddr": "10.0.0.2", 00:19:08.750 "trsvcid": "4420" 00:19:08.750 }, 00:19:08.750 "peer_address": { 00:19:08.750 "trtype": "TCP", 00:19:08.750 "adrfam": "IPv4", 00:19:08.750 "traddr": "10.0.0.1", 00:19:08.750 "trsvcid": "35194" 00:19:08.750 }, 00:19:08.750 "auth": { 00:19:08.750 "state": "completed", 00:19:08.750 "digest": "sha512", 00:19:08.750 "dhgroup": "null" 00:19:08.750 } 00:19:08.750 } 00:19:08.750 ]' 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.750 11:45:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.009 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret 
DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.577 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.836 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.837 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.095 00:19:10.095 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.095 11:45:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.095 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.095 11:45:38 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.095 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.095 11:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.096 11:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.096 11:45:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.096 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.096 { 00:19:10.096 "cntlid": 99, 00:19:10.096 "qid": 0, 00:19:10.096 "state": "enabled", 00:19:10.096 "thread": "nvmf_tgt_poll_group_000", 00:19:10.096 "listen_address": { 00:19:10.096 "trtype": "TCP", 00:19:10.096 "adrfam": "IPv4", 00:19:10.096 "traddr": "10.0.0.2", 00:19:10.096 "trsvcid": "4420" 00:19:10.096 }, 00:19:10.096 "peer_address": { 00:19:10.096 "trtype": "TCP", 00:19:10.096 "adrfam": "IPv4", 00:19:10.096 "traddr": "10.0.0.1", 00:19:10.096 "trsvcid": "35216" 00:19:10.096 }, 00:19:10.096 "auth": { 00:19:10.096 "state": "completed", 00:19:10.096 "digest": "sha512", 00:19:10.096 "dhgroup": "null" 00:19:10.096 } 00:19:10.096 } 00:19:10.096 ]' 00:19:10.096 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.354 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.612 11:45:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.179 11:45:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.179 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.437 00:19:11.437 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.437 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.437 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.697 { 00:19:11.697 "cntlid": 101, 00:19:11.697 "qid": 0, 00:19:11.697 "state": "enabled", 00:19:11.697 "thread": "nvmf_tgt_poll_group_000", 00:19:11.697 "listen_address": { 00:19:11.697 "trtype": "TCP", 00:19:11.697 "adrfam": "IPv4", 00:19:11.697 "traddr": "10.0.0.2", 00:19:11.697 "trsvcid": "4420" 00:19:11.697 }, 00:19:11.697 "peer_address": { 00:19:11.697 "trtype": "TCP", 00:19:11.697 "adrfam": "IPv4", 00:19:11.697 "traddr": "10.0.0.1", 00:19:11.697 "trsvcid": "35238" 00:19:11.697 }, 00:19:11.697 "auth": 
{ 00:19:11.697 "state": "completed", 00:19:11.697 "digest": "sha512", 00:19:11.697 "dhgroup": "null" 00:19:11.697 } 00:19:11.697 } 00:19:11.697 ]' 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.697 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.956 11:45:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.523 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.781 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.040 00:19:13.040 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.040 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.040 11:45:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.040 { 00:19:13.040 "cntlid": 103, 00:19:13.040 "qid": 0, 00:19:13.040 "state": "enabled", 00:19:13.040 "thread": "nvmf_tgt_poll_group_000", 00:19:13.040 "listen_address": { 00:19:13.040 "trtype": "TCP", 00:19:13.040 "adrfam": "IPv4", 00:19:13.040 "traddr": "10.0.0.2", 00:19:13.040 "trsvcid": "4420" 00:19:13.040 }, 00:19:13.040 "peer_address": { 00:19:13.040 "trtype": "TCP", 00:19:13.040 "adrfam": "IPv4", 00:19:13.040 "traddr": "10.0.0.1", 00:19:13.040 "trsvcid": "35256" 00:19:13.040 }, 00:19:13.040 "auth": { 00:19:13.040 "state": "completed", 00:19:13.040 "digest": "sha512", 00:19:13.040 "dhgroup": "null" 00:19:13.040 } 00:19:13.040 } 00:19:13.040 ]' 00:19:13.040 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.555 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.121 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.121 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.380 00:19:14.380 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.380 11:45:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.380 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.639 { 00:19:14.639 "cntlid": 105, 00:19:14.639 "qid": 0, 00:19:14.639 "state": "enabled", 00:19:14.639 "thread": "nvmf_tgt_poll_group_000", 00:19:14.639 "listen_address": { 00:19:14.639 "trtype": "TCP", 00:19:14.639 "adrfam": "IPv4", 00:19:14.639 "traddr": "10.0.0.2", 00:19:14.639 "trsvcid": "4420" 00:19:14.639 }, 00:19:14.639 "peer_address": { 00:19:14.639 "trtype": "TCP", 00:19:14.639 "adrfam": "IPv4", 00:19:14.639 "traddr": "10.0.0.1", 00:19:14.639 "trsvcid": "53552" 00:19:14.639 }, 00:19:14.639 "auth": { 00:19:14.639 "state": "completed", 00:19:14.639 "digest": "sha512", 00:19:14.639 "dhgroup": "ffdhe2048" 00:19:14.639 } 00:19:14.639 } 00:19:14.639 ]' 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.639 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.972 11:45:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
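Every connect_authenticate round traced here has the same shape: the host restricts DH-HMAC-CHAP negotiation to one digest/dhgroup pair with bdev_nvme_set_options, the target registers the host's key via nvmf_subsystem_add_host, the host attaches a controller (which runs the authentication exchange), and the negotiated digest, dhgroup, and state are then read back out of nvmf_subsystem_get_qpairs before teardown. A minimal sketch of one round, using only the RPCs visible in this trace (SUBNQN and HOSTNQN are stand-ins for the literal NQNs above, and key1/ckey1 assume the same pre-registered key names the test uses):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
  # host side: offer only sha512 + ffdhe2048 during DH-HMAC-CHAP negotiation
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
  # target side: allow this host to authenticate with key1 (ckey1 adds bidirectional auth)
  $rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # host side: attaching the controller performs the authentication handshake
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # target side: confirm the qpair finished authentication with the expected parameters
  $rpc nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'   # expect: completed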
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:15.544 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.545 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:15.803
00:19:15.803 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:15.803 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:15.803 11:45:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:16.061 {
00:19:16.061 "cntlid": 107,
00:19:16.061 "qid": 0,
00:19:16.061 "state": "enabled",
00:19:16.061 "thread": "nvmf_tgt_poll_group_000",
00:19:16.061 "listen_address": {
00:19:16.061 "trtype": "TCP",
00:19:16.061 "adrfam": "IPv4",
00:19:16.061 "traddr": "10.0.0.2",
00:19:16.061 "trsvcid": "4420"
00:19:16.061 },
00:19:16.061 "peer_address": {
00:19:16.061 "trtype": "TCP",
00:19:16.061 "adrfam": "IPv4",
00:19:16.061 "traddr": "10.0.0.1",
00:19:16.061 "trsvcid": "53574"
00:19:16.061 },
00:19:16.061 "auth": {
00:19:16.061 "state": "completed",
00:19:16.061 "digest": "sha512",
00:19:16.061 "dhgroup": "ffdhe2048"
00:19:16.061 }
00:19:16.061 }
00:19:16.061 ]'
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:16.061 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:16.319 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:16.319 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:16.319 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:16.319 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:16.319 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==:
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:16.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:16.886 11:45:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.144 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:17.403
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:17.403 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:17.403 {
00:19:17.403 "cntlid": 109,
00:19:17.403 "qid": 0,
00:19:17.403 "state": "enabled",
00:19:17.403 "thread": "nvmf_tgt_poll_group_000",
00:19:17.403 "listen_address": {
00:19:17.403 "trtype": "TCP",
00:19:17.403 "adrfam": "IPv4",
00:19:17.403 "traddr": "10.0.0.2",
00:19:17.403 "trsvcid": "4420"
00:19:17.403 },
00:19:17.403 "peer_address": {
00:19:17.403 "trtype": "TCP",
00:19:17.403 "adrfam": "IPv4",
00:19:17.403 "traddr": "10.0.0.1",
00:19:17.403 "trsvcid": "53600"
00:19:17.403 },
00:19:17.403 "auth": {
00:19:17.403 "state": "completed",
00:19:17.403 "digest": "sha512",
00:19:17.403 "dhgroup": "ffdhe2048"
00:19:17.403 }
00:19:17.403 }
00:19:17.403 ]'
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:17.662 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:17.921 11:45:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty:
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:18.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:18.489 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:18.747
00:19:18.747 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:18.747 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:18.747 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:19.006 {
00:19:19.006 "cntlid": 111,
00:19:19.006 "qid": 0,
00:19:19.006 "state": "enabled",
00:19:19.006 "thread": "nvmf_tgt_poll_group_000",
00:19:19.006 "listen_address": {
00:19:19.006 "trtype": "TCP",
00:19:19.006 "adrfam": "IPv4",
00:19:19.006 "traddr": "10.0.0.2",
00:19:19.006 "trsvcid": "4420"
00:19:19.006 },
00:19:19.006 "peer_address": {
00:19:19.006 "trtype": "TCP",
00:19:19.006 "adrfam": "IPv4",
00:19:19.006 "traddr": "10.0.0.1",
00:19:19.006 "trsvcid": "53636"
00:19:19.006 },
00:19:19.006 "auth": {
00:19:19.006 "state": "completed",
00:19:19.006 "digest": "sha512",
00:19:19.006 "dhgroup": "null"
00:19:19.006 }
00:19:19.006 }
00:19:19.006 ]'
00:19:13.298 11:45:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:19.006 11:45:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:19.006 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:19.264 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=:
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:19.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:19.832 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.091 11:45:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:20.349
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:20.349 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:20.349 {
00:19:20.349 "cntlid": 113,
00:19:20.349 "qid": 0,
00:19:20.349 "state": "enabled",
00:19:20.349 "thread": "nvmf_tgt_poll_group_000",
00:19:20.349 "listen_address": {
00:19:20.349 "trtype": "TCP",
00:19:20.349 "adrfam": "IPv4",
00:19:20.349 "traddr": "10.0.0.2",
00:19:20.349 "trsvcid": "4420"
00:19:20.349 },
00:19:20.349 "peer_address": {
00:19:20.349 "trtype": "TCP",
00:19:20.349 "adrfam": "IPv4",
00:19:20.349 "traddr": "10.0.0.1",
00:19:20.349 "trsvcid": "53670"
00:19:20.349 },
00:19:20.349 "auth": {
00:19:20.349 "state": "completed",
00:19:20.349 "digest": "sha512",
00:19:20.349 "dhgroup": "ffdhe3072"
00:19:20.349 }
00:19:20.349 }
00:19:20.349 ]'
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:20.607 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:20.866 11:45:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=:
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:21.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.432 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:21.690
00:19:21.690 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:21.690 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:21.690 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
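Alongside the RPC-level round-trips, each cycle revalidates the handshake through nvme-cli with the secrets passed inline; in the DHHC-1:NN: prefix, per the NVMe-oF in-band authentication spec, 00 denotes an untransformed secret while 01/02/03 denote SHA-256/384/512-transformed keys, which is why key0 above carries DHHC-1:00: and key3 carries DHHC-1:03:. A condensed sketch of the per-qpair verification the script performs at target/auth.sh@46-48, assuming $qpairs holds the JSON array printed by nvmf_subsystem_get_qpairs:

  # same three probes as target/auth.sh@46-48, collapsed into one test
  digest=$(jq -r '.[0].auth.digest' <<< "$qpairs")
  dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
  state=$(jq -r '.[0].auth.state' <<< "$qpairs")
  [[ $digest == sha512 && $dhgroup == ffdhe3072 && $state == completed ]]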
00:19:21.949 ]'
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:21.949 11:45:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:22.207 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==:
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:22.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:22.774 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.032 11:45:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:23.032
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:23.291 {
00:19:23.291 "cntlid": 117,
00:19:23.291 "qid": 0,
00:19:23.291 "state": "enabled",
00:19:23.291 "thread": "nvmf_tgt_poll_group_000",
00:19:23.291 "listen_address": {
00:19:23.291 "trtype": "TCP",
00:19:23.291 "adrfam": "IPv4",
00:19:23.291 "traddr": "10.0.0.2",
00:19:23.291 "trsvcid": "4420"
00:19:23.291 },
00:19:23.291 "peer_address": {
00:19:23.291 "trtype": "TCP",
00:19:23.291 "adrfam": "IPv4",
00:19:23.291 "traddr": "10.0.0.1",
00:19:23.291 "trsvcid": "53718"
00:19:23.291 },
00:19:23.291 "auth": {
00:19:23.291 "state": "completed",
00:19:23.291 "digest": "sha512",
00:19:23.291 "dhgroup": "ffdhe3072"
00:19:23.291 }
00:19:23.291 }
00:19:23.291 ]'
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:23.291 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:23.549 11:45:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty:
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:24.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:24.115 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:24.373 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:24.632
00:19:24.632 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:24.632 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:24.632 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:24.891 {
00:19:24.891 "cntlid": 119,
00:19:24.891 "qid": 0,
00:19:24.891 "state": "enabled",
00:19:24.891 "thread": "nvmf_tgt_poll_group_000",
00:19:24.891 "listen_address": {
00:19:24.891 "trtype": "TCP",
00:19:24.891 "adrfam": "IPv4",
00:19:24.891 "traddr": "10.0.0.2",
00:19:24.891 "trsvcid": "4420"
00:19:24.891 },
00:19:24.891 "peer_address": {
00:19:24.891 "trtype": "TCP",
00:19:24.891 "adrfam": "IPv4",
00:19:24.891 "traddr": "10.0.0.1",
00:19:24.891 "trsvcid": "33014"
00:19:24.891 },
00:19:24.891 "auth": {
00:19:24.891 "state": "completed",
00:19:24.891 "digest": "sha512",
00:19:24.891 "dhgroup": "ffdhe3072"
00:19:24.891 }
00:19:24.891 }
00:19:24.891 ]'
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:24.891 11:45:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:25.149 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=:
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:25.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:25.715 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:25.974 11:45:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:26.232
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:26.232 {
00:19:26.232 "cntlid": 121,
00:19:26.232 "qid": 0,
00:19:26.232 "state": "enabled",
00:19:26.232 "thread": "nvmf_tgt_poll_group_000",
00:19:26.232 "listen_address": {
00:19:26.232 "trtype": "TCP",
00:19:26.232 "adrfam": "IPv4",
00:19:26.232 "traddr": "10.0.0.2", 00:19:26.232 "trsvcid": "4420" 00:19:26.232 }, 00:19:26.232 "peer_address": { 00:19:26.232 "trtype": "TCP", 00:19:26.232 "adrfam": "IPv4", 00:19:26.232 "traddr": "10.0.0.1", 00:19:26.232 "trsvcid": "33046" 00:19:26.232 }, 00:19:26.232 "auth": { 00:19:26.232 "state": "completed", 00:19:26.232 "digest": "sha512", 00:19:26.232 "dhgroup": "ffdhe4096" 00:19:26.232 } 00:19:26.232 } 00:19:26.232 ]' 00:19:26.232 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.490 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.491 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.748 11:45:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.315 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.316 11:45:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.316 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.574 00:19:27.574 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.574 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.574 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.833 { 00:19:27.833 "cntlid": 123, 00:19:27.833 "qid": 0, 00:19:27.833 "state": "enabled", 00:19:27.833 "thread": "nvmf_tgt_poll_group_000", 00:19:27.833 "listen_address": { 00:19:27.833 "trtype": "TCP", 00:19:27.833 "adrfam": "IPv4", 00:19:27.833 "traddr": "10.0.0.2", 00:19:27.833 "trsvcid": "4420" 00:19:27.833 }, 00:19:27.833 "peer_address": { 00:19:27.833 "trtype": "TCP", 00:19:27.833 "adrfam": "IPv4", 00:19:27.833 "traddr": "10.0.0.1", 00:19:27.833 "trsvcid": "33074" 00:19:27.833 }, 00:19:27.833 "auth": { 00:19:27.833 "state": "completed", 00:19:27.833 "digest": "sha512", 00:19:27.833 "dhgroup": "ffdhe4096" 00:19:27.833 } 00:19:27.833 } 00:19:27.833 ]' 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:27.833 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.091 11:45:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.091 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.091 11:45:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.091 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.658 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.917 11:45:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.175 00:19:29.175 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.175 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.175 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.434 { 00:19:29.434 "cntlid": 125, 00:19:29.434 "qid": 0, 00:19:29.434 "state": "enabled", 00:19:29.434 "thread": "nvmf_tgt_poll_group_000", 00:19:29.434 "listen_address": { 00:19:29.434 "trtype": "TCP", 00:19:29.434 "adrfam": "IPv4", 00:19:29.434 "traddr": "10.0.0.2", 00:19:29.434 "trsvcid": "4420" 00:19:29.434 }, 00:19:29.434 "peer_address": { 00:19:29.434 "trtype": "TCP", 00:19:29.434 "adrfam": "IPv4", 00:19:29.434 "traddr": "10.0.0.1", 00:19:29.434 "trsvcid": "33106" 00:19:29.434 }, 00:19:29.434 "auth": { 00:19:29.434 "state": "completed", 00:19:29.434 "digest": "sha512", 00:19:29.434 "dhgroup": "ffdhe4096" 00:19:29.434 } 00:19:29.434 } 00:19:29.434 ]' 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.434 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.691 11:45:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.257 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:30.257 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:30.514
00:19:30.514 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:30.514 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:30.514 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:30.772 {
00:19:30.772 "cntlid": 127,
00:19:30.772 "qid": 0,
00:19:30.772 "state": "enabled",
00:19:30.772 "thread": "nvmf_tgt_poll_group_000",
00:19:30.772 "listen_address": {
00:19:30.772 "trtype": "TCP",
00:19:30.772 "adrfam": "IPv4",
00:19:30.772 "traddr": "10.0.0.2",
00:19:30.772 "trsvcid": "4420"
00:19:30.772 },
00:19:30.772 "peer_address": {
00:19:30.772 "trtype": "TCP",
00:19:30.772 "adrfam": "IPv4",
00:19:30.772 "traddr": "10.0.0.1",
00:19:30.772 "trsvcid": "33134"
00:19:30.772 },
00:19:30.772 "auth": {
00:19:30.772 "state": "completed",
00:19:30.772 "digest": "sha512",
00:19:30.772 "dhgroup": "ffdhe4096"
00:19:30.772 }
00:19:30.772 }
00:19:30.772 ]'
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:30.772 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:31.030 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:19:31.030 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:31.030 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:31.030 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:31.030 11:45:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:31.030 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=:
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:31.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:31.627 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:31.885 11:45:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:32.142
00:19:32.142 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:32.142 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:32.142 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:32.399 {
00:19:32.399 "cntlid": 129,
00:19:32.399 "qid": 0,
00:19:32.399 "state": "enabled",
00:19:32.399 "thread": "nvmf_tgt_poll_group_000",
00:19:32.399 "listen_address": {
00:19:32.399 "trtype": "TCP",
00:19:32.399 "adrfam": "IPv4",
00:19:32.399 "traddr": "10.0.0.2",
00:19:32.399 "trsvcid": "4420"
00:19:32.399 },
00:19:32.399 "peer_address": {
00:19:32.399 "trtype": "TCP",
00:19:32.399 "adrfam": "IPv4",
00:19:32.399 "traddr": "10.0.0.1",
00:19:32.399 "trsvcid": "33172"
00:19:32.399 },
00:19:32.399 "auth": {
00:19:32.399 "state": "completed",
00:19:32.399 "digest": "sha512",
00:19:32.399 "dhgroup": "ffdhe6144"
00:19:32.399 }
00:19:32.399 }
00:19:32.399 ]'
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:32.399 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:32.657 11:46:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=:
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:33.223 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:33.223 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.483 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:33.741
00:19:33.741 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:33.741 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:33.741 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:34.000 {
00:19:34.000 "cntlid": 131,
00:19:34.000 "qid": 0,
00:19:34.000 "state": "enabled",
00:19:34.000 "thread": "nvmf_tgt_poll_group_000",
00:19:34.000 "listen_address": {
00:19:34.000 "trtype": "TCP",
00:19:34.000 "adrfam": "IPv4",
00:19:34.000 "traddr": "10.0.0.2",
00:19:34.000 "trsvcid": "4420"
00:19:34.000 },
00:19:34.000 "peer_address": {
00:19:34.000 "trtype": "TCP",
00:19:34.000 "adrfam": "IPv4",
00:19:34.000 "traddr": "10.0.0.1",
00:19:34.000 "trsvcid": "59926"
00:19:34.000 },
00:19:34.000 "auth": {
00:19:34.000 "state": "completed",
00:19:34.000 "digest": "sha512",
00:19:34.000 "dhgroup": "ffdhe6144"
00:19:34.000 }
00:19:34.000 }
00:19:34.000 ]'
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:34.000 11:46:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:34.000 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:34.259 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==:
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:34.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:34.826 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:34.827 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:35.085 11:46:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:35.343
00:19:35.343 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:35.343 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:35.343 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:35.601 {
00:19:35.601 "cntlid": 133,
00:19:35.601 "qid": 0,
00:19:35.601 "state": "enabled",
00:19:35.601 "thread": "nvmf_tgt_poll_group_000",
00:19:35.601 "listen_address": {
00:19:35.601 "trtype": "TCP",
00:19:35.601 "adrfam": "IPv4",
00:19:35.601 "traddr": "10.0.0.2",
00:19:35.601 "trsvcid": "4420"
00:19:35.601 },
00:19:35.601 "peer_address": {
00:19:35.601 "trtype": "TCP",
00:19:35.601 "adrfam": "IPv4",
00:19:35.601 "traddr": "10.0.0.1",
00:19:35.601 "trsvcid": "59956"
00:19:35.601 },
00:19:35.601 "auth": {
00:19:35.601 "state": "completed",
00:19:35.601 "digest": "sha512",
00:19:35.601 "dhgroup": "ffdhe6144"
00:19:35.601 }
00:19:35.601 }
00:19:35.601 ]'
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:35.601 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:35.859 11:46:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty:
00:19:36.425 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:36.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:36.425 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:19:36.425 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.425 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.425 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:36.426 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:36.993
00:19:36.993 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:36.993 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:36.993 11:46:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:36.993 {
00:19:36.993 "cntlid": 135,
00:19:36.993 "qid": 0,
00:19:36.993 "state": "enabled",
00:19:36.993 "thread": "nvmf_tgt_poll_group_000",
00:19:36.993 "listen_address": {
00:19:36.993 "trtype": "TCP",
00:19:36.993 "adrfam": "IPv4",
00:19:36.993 "traddr": "10.0.0.2",
00:19:36.993 "trsvcid": "4420"
00:19:36.993 },
00:19:36.993 "peer_address": { 00:19:36.993 "trtype": "TCP", 00:19:36.993 "adrfam": "IPv4", 00:19:36.993 "traddr": "10.0.0.1", 00:19:36.993 "trsvcid": "59998" 00:19:36.993 }, 00:19:36.993 "auth": { 00:19:36.993 "state": "completed", 00:19:36.993 "digest": "sha512", 00:19:36.993 "dhgroup": "ffdhe6144" 00:19:36.993 } 00:19:36.993 } 00:19:36.993 ]' 00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.993 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.252 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.826 11:46:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.084 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.650 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.650 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.650 { 00:19:38.650 "cntlid": 137, 00:19:38.650 "qid": 0, 00:19:38.650 "state": "enabled", 00:19:38.650 "thread": "nvmf_tgt_poll_group_000", 00:19:38.650 "listen_address": { 00:19:38.650 "trtype": "TCP", 00:19:38.650 "adrfam": "IPv4", 00:19:38.650 "traddr": "10.0.0.2", 00:19:38.650 "trsvcid": "4420" 00:19:38.650 }, 00:19:38.650 "peer_address": { 00:19:38.650 "trtype": "TCP", 00:19:38.650 "adrfam": "IPv4", 00:19:38.650 "traddr": "10.0.0.1", 00:19:38.650 "trsvcid": "60034" 00:19:38.650 }, 00:19:38.650 "auth": { 00:19:38.650 "state": "completed", 00:19:38.650 "digest": "sha512", 00:19:38.650 "dhgroup": "ffdhe8192" 00:19:38.650 } 00:19:38.650 } 00:19:38.650 ]' 00:19:38.651 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.909 11:46:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.909 11:46:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.166 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.731 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.987 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.988 11:46:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.245 00:19:40.245 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.245 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.245 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.503 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.503 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.503 11:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.503 11:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.504 { 00:19:40.504 "cntlid": 139, 00:19:40.504 "qid": 0, 00:19:40.504 "state": "enabled", 00:19:40.504 "thread": "nvmf_tgt_poll_group_000", 00:19:40.504 "listen_address": { 00:19:40.504 "trtype": "TCP", 00:19:40.504 "adrfam": "IPv4", 00:19:40.504 "traddr": "10.0.0.2", 00:19:40.504 "trsvcid": "4420" 00:19:40.504 }, 00:19:40.504 "peer_address": { 00:19:40.504 "trtype": "TCP", 00:19:40.504 "adrfam": "IPv4", 00:19:40.504 "traddr": "10.0.0.1", 00:19:40.504 "trsvcid": "60050" 00:19:40.504 }, 00:19:40.504 "auth": { 00:19:40.504 "state": "completed", 00:19:40.504 "digest": "sha512", 00:19:40.504 "dhgroup": "ffdhe8192" 00:19:40.504 } 00:19:40.504 } 00:19:40.504 ]' 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:40.504 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.761 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.761 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.761 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.761 11:46:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:01:NGQ2NzA4MmNmMDI4ZjU5MTM4ODIyNjc0NTNmZTEzMWEKbvth: --dhchap-ctrl-secret DHHC-1:02:NjBkYzI5NjI2MWUyYWZmZGRiZTI4NThkNzZmZjJkOWQ5MWJmNmQyYzkzMmM1NDdjmVBXQg==: 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.325 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.582 11:46:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.148 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.148 { 00:19:42.148 "cntlid": 141, 00:19:42.148 "qid": 0, 00:19:42.148 "state": "enabled", 00:19:42.148 "thread": "nvmf_tgt_poll_group_000", 00:19:42.148 "listen_address": { 00:19:42.148 "trtype": "TCP", 00:19:42.148 "adrfam": "IPv4", 00:19:42.148 "traddr": "10.0.0.2", 00:19:42.148 "trsvcid": "4420" 00:19:42.148 }, 00:19:42.148 "peer_address": { 00:19:42.148 "trtype": "TCP", 00:19:42.148 "adrfam": "IPv4", 00:19:42.148 "traddr": "10.0.0.1", 00:19:42.148 "trsvcid": "60076" 00:19:42.148 }, 00:19:42.148 "auth": { 00:19:42.148 "state": "completed", 00:19:42.148 "digest": "sha512", 00:19:42.148 "dhgroup": "ffdhe8192" 00:19:42.148 } 00:19:42.148 } 00:19:42.148 ]' 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.148 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.406 11:46:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:02:NTZmOTE1OTAxNDA5ZDlmZmY5YjU3YzI2YzA4MzBjNTM1NGZlZTI1Njg1YjBmYjI42N6jcw==: --dhchap-ctrl-secret DHHC-1:01:NDhmMTM4OWViODJhOGUyNjc2Y2ZkNGI3YmIyYjQxNWSnSqty: 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:42.971 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.230 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.796 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.796 { 00:19:43.796 "cntlid": 143, 00:19:43.796 "qid": 0, 00:19:43.796 "state": "enabled", 00:19:43.796 "thread": "nvmf_tgt_poll_group_000", 00:19:43.796 "listen_address": { 00:19:43.796 "trtype": "TCP", 00:19:43.796 "adrfam": "IPv4", 00:19:43.796 "traddr": "10.0.0.2", 00:19:43.796 "trsvcid": "4420" 00:19:43.796 }, 00:19:43.796 "peer_address": { 00:19:43.796 "trtype": "TCP", 00:19:43.796 "adrfam": "IPv4", 00:19:43.796 "traddr": "10.0.0.1", 00:19:43.796 "trsvcid": "60108" 00:19:43.796 }, 00:19:43.796 "auth": { 00:19:43.796 "state": "completed", 00:19:43.796 "digest": "sha512", 00:19:43.796 "dhgroup": "ffdhe8192" 00:19:43.796 } 00:19:43.796 } 00:19:43.796 ]' 00:19:43.796 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.053 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.053 
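Everything above in this stretch is the suite's connect_authenticate helper run once per --dhchap-dhgroups value (ffdhe6144, then ffdhe8192) and per key index 0-3: the host-side RPC server is pinned to a single digest/dhgroup pair, the host NQN is authorized on the subsystem with that key (plus a controller key when the key index defines one; key3 has none), a controller is attached over TCP, and the qpair's negotiated auth parameters are read back and checked. Roughly, one iteration reduces to the sketch below; the rpc.py path, sockets, and NQNs are the ones this log uses, key0/ckey0 stand in for whichever key index the loop is on, and the target-side calls go through the suite's rpc_cmd wrapper rather than bare rpc.py as written here.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e

# Pin the initiator to exactly one digest/dhgroup pair for this iteration.
"$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

# Authorize the host on the subsystem with the key under test (target side).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller; the DH-HMAC-CHAP exchange runs during connect.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Read back what the qpair actually negotiated and verify all three fields.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

# Tear down so the next digest/dhgroup/key combination starts clean.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

Each iteration also pushes the same credentials through the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:..., then nvme disconnect -n) before nvmf_subsystem_remove_host clears the host entry for the next pass.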
11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.053 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:44.053 11:46:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.053 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.053 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.053 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.311 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.877 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.878 11:46:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.443 00:19:45.443 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.443 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.443 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.701 { 00:19:45.701 "cntlid": 145, 00:19:45.701 "qid": 0, 00:19:45.701 "state": "enabled", 00:19:45.701 "thread": "nvmf_tgt_poll_group_000", 00:19:45.701 "listen_address": { 00:19:45.701 "trtype": "TCP", 00:19:45.701 "adrfam": "IPv4", 00:19:45.701 "traddr": "10.0.0.2", 00:19:45.701 "trsvcid": "4420" 00:19:45.701 }, 00:19:45.701 "peer_address": { 00:19:45.701 "trtype": "TCP", 00:19:45.701 "adrfam": "IPv4", 00:19:45.701 "traddr": "10.0.0.1", 00:19:45.701 "trsvcid": "38594" 00:19:45.701 }, 00:19:45.701 "auth": { 00:19:45.701 "state": "completed", 00:19:45.701 "digest": "sha512", 00:19:45.701 "dhgroup": "ffdhe8192" 00:19:45.701 } 00:19:45.701 } 00:19:45.701 ]' 00:19:45.701 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.702 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.960 11:46:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:00:MGQ0MGNiZWNlNjJhNTdlMzY3N2U1NmY4ZTkyNzlmODZjZWZmYTg0NGFjNzAyN2IzU/qdXA==: --dhchap-ctrl-secret DHHC-1:03:ZDU5YWNmOWFjMDUxYTA5Yzk1ZDFlOTllOTExODI3NDg5Y2FmZjE0NDI4MGNlYTZmN2RiMDk3ZjcyM2UwN2Q5YfYdcls=: 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.525 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.526 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:46.526 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:19:46.806 request: 00:19:46.806 { 00:19:46.806 "name": "nvme0", 00:19:46.806 "trtype": "tcp", 00:19:46.806 "traddr": "10.0.0.2", 00:19:46.806 "adrfam": "ipv4", 00:19:46.806 "trsvcid": "4420", 00:19:46.806 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:46.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:46.806 "prchk_reftag": false, 00:19:46.806 "prchk_guard": false, 00:19:46.806 "hdgst": false, 00:19:46.806 "ddgst": false, 00:19:46.806 "dhchap_key": "key2", 00:19:46.806 "method": "bdev_nvme_attach_controller", 00:19:46.806 "req_id": 1 00:19:46.806 } 00:19:46.806 Got JSON-RPC error response 00:19:46.806 response: 00:19:46.806 { 00:19:46.806 "code": -5, 00:19:46.806 "message": "Input/output error" 00:19:46.806 } 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:46.806 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:46.807 11:46:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:46.807 11:46:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:47.372 request: 00:19:47.372 { 00:19:47.372 "name": "nvme0", 00:19:47.372 "trtype": "tcp", 00:19:47.372 "traddr": "10.0.0.2", 00:19:47.372 "adrfam": "ipv4", 00:19:47.372 "trsvcid": "4420", 00:19:47.372 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:47.372 "prchk_reftag": false, 00:19:47.372 "prchk_guard": false, 00:19:47.372 "hdgst": false, 00:19:47.372 "ddgst": false, 00:19:47.372 "dhchap_key": "key1", 00:19:47.372 "dhchap_ctrlr_key": "ckey2", 00:19:47.372 "method": "bdev_nvme_attach_controller", 00:19:47.372 "req_id": 1 00:19:47.372 } 00:19:47.372 Got JSON-RPC error response 00:19:47.372 response: 00:19:47.372 { 00:19:47.372 "code": -5, 00:19:47.372 "message": "Input/output error" 00:19:47.372 } 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key1 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.372 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.630 request: 00:19:47.630 { 00:19:47.630 "name": "nvme0", 00:19:47.630 "trtype": "tcp", 00:19:47.630 "traddr": "10.0.0.2", 00:19:47.630 "adrfam": "ipv4", 00:19:47.630 "trsvcid": "4420", 00:19:47.630 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:47.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:47.630 "prchk_reftag": false, 00:19:47.630 "prchk_guard": false, 00:19:47.630 "hdgst": false, 00:19:47.630 "ddgst": false, 00:19:47.630 "dhchap_key": "key1", 00:19:47.630 "dhchap_ctrlr_key": "ckey1", 00:19:47.630 "method": "bdev_nvme_attach_controller", 00:19:47.630 "req_id": 1 00:19:47.630 } 00:19:47.630 Got JSON-RPC error response 00:19:47.630 response: 00:19:47.630 { 00:19:47.630 "code": -5, 00:19:47.630 "message": "Input/output error" 00:19:47.630 } 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1967127 ']' 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1967127' 00:19:47.888 killing process with pid 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1967127 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:47.888 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.145 11:46:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1988374 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1988374 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1988374 ']' 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.146 11:46:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.146 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.146 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.749 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.749 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:48.749 11:46:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:48.749 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:48.749 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1988374 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1988374 ']' 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
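The three NOT-wrapped attach attempts above are the negative half of the matrix: once the target knows only key1 for this host, connecting with key2, with a mismatched controller key (key1 paired with ckey2), or with a controller key the target was never given (key1/ckey1 after the host entry was re-added without one) must all fail, and each failure surfaces as the same JSON-RPC error ("code": -5, "message": "Input/output error") from bdev_nvme_attach_controller. The original target process (pid 1967127) is then killed and a new one (pid 1988374) started with --wait-for-rpc -L nvmf_auth, so the remaining failure cases run with the auth debug log component enabled. A stripped-down version of one negative check, reusing the hypothetical $rpc/$subnqn/$hostnqn variables from the sketch above in place of the suite's NOT helper:

# The attach must fail: the target accepts only key1 for this host.
if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key2; then
    echo "attach succeeded with a key the target does not allow" >&2
    exit 1
fi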
00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.007 11:46:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.007 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.007 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:49.007 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:49.007 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.007 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.266 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:49.894 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.894 { 00:19:49.894 
"cntlid": 1, 00:19:49.894 "qid": 0, 00:19:49.894 "state": "enabled", 00:19:49.894 "thread": "nvmf_tgt_poll_group_000", 00:19:49.894 "listen_address": { 00:19:49.894 "trtype": "TCP", 00:19:49.894 "adrfam": "IPv4", 00:19:49.894 "traddr": "10.0.0.2", 00:19:49.894 "trsvcid": "4420" 00:19:49.894 }, 00:19:49.894 "peer_address": { 00:19:49.894 "trtype": "TCP", 00:19:49.894 "adrfam": "IPv4", 00:19:49.894 "traddr": "10.0.0.1", 00:19:49.894 "trsvcid": "38650" 00:19:49.894 }, 00:19:49.894 "auth": { 00:19:49.894 "state": "completed", 00:19:49.894 "digest": "sha512", 00:19:49.894 "dhgroup": "ffdhe8192" 00:19:49.894 } 00:19:49.894 } 00:19:49.894 ]' 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.894 11:46:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.153 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid 006f0d1b-21c0-e711-906e-00163566263e --dhchap-secret DHHC-1:03:MDYwY2I5Y2Q0NTIwOTEwNDJiMGZlMmFlZThmNTNiNmVhMzAyNzUzZWJiN2ZjNjgyZjQ1MmY0MWRiYThmODNjNaXvLAw=: 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --dhchap-key key3 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:50.719 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.978 11:46:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.978 request: 00:19:50.978 { 00:19:50.978 "name": "nvme0", 00:19:50.978 "trtype": "tcp", 00:19:50.978 "traddr": "10.0.0.2", 00:19:50.978 "adrfam": "ipv4", 00:19:50.978 "trsvcid": "4420", 00:19:50.978 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:50.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:50.978 "prchk_reftag": false, 00:19:50.978 "prchk_guard": false, 00:19:50.978 "hdgst": false, 00:19:50.978 "ddgst": false, 00:19:50.978 "dhchap_key": "key3", 00:19:50.978 "method": "bdev_nvme_attach_controller", 00:19:50.978 "req_id": 1 00:19:50.978 } 00:19:50.978 Got JSON-RPC error response 00:19:50.978 response: 00:19:50.978 { 00:19:50.978 "code": -5, 00:19:50.978 "message": "Input/output error" 00:19:50.978 } 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:50.978 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.237 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.496 request: 00:19:51.496 { 00:19:51.496 "name": "nvme0", 00:19:51.496 "trtype": "tcp", 00:19:51.496 "traddr": "10.0.0.2", 00:19:51.496 "adrfam": "ipv4", 00:19:51.496 "trsvcid": "4420", 00:19:51.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:51.496 "prchk_reftag": false, 00:19:51.496 "prchk_guard": false, 00:19:51.496 "hdgst": false, 00:19:51.496 "ddgst": false, 00:19:51.496 "dhchap_key": "key3", 00:19:51.497 "method": "bdev_nvme_attach_controller", 00:19:51.497 "req_id": 1 00:19:51.497 } 00:19:51.497 Got JSON-RPC error response 00:19:51.497 response: 00:19:51.497 { 00:19:51.497 "code": -5, 00:19:51.497 "message": "Input/output error" 00:19:51.497 } 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.497 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:51.756 request: 00:19:51.756 { 00:19:51.756 "name": "nvme0", 00:19:51.756 "trtype": "tcp", 00:19:51.756 "traddr": "10.0.0.2", 00:19:51.756 "adrfam": "ipv4", 00:19:51.756 "trsvcid": "4420", 00:19:51.756 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:51.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e", 00:19:51.756 "prchk_reftag": false, 00:19:51.756 "prchk_guard": false, 00:19:51.756 "hdgst": false, 00:19:51.756 "ddgst": false, 00:19:51.756 
"dhchap_key": "key0", 00:19:51.756 "dhchap_ctrlr_key": "key1", 00:19:51.756 "method": "bdev_nvme_attach_controller", 00:19:51.756 "req_id": 1 00:19:51.756 } 00:19:51.756 Got JSON-RPC error response 00:19:51.756 response: 00:19:51.756 { 00:19:51.756 "code": -5, 00:19:51.756 "message": "Input/output error" 00:19:51.756 } 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:51.756 11:46:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.013 00:19:52.013 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:19:52.013 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:19:52.013 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.271 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.271 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.271 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1967346 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1967346 ']' 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1967346 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1967346 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1967346' 00:19:52.529 killing process with pid 1967346 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1967346 00:19:52.529 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1967346 
00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.788 rmmod nvme_tcp 00:19:52.788 rmmod nvme_fabrics 00:19:52.788 rmmod nvme_keyring 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1988374 ']' 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1988374 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1988374 ']' 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1988374 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1988374 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1988374' 00:19:52.788 killing process with pid 1988374 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1988374 00:19:52.788 11:46:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1988374 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:53.046 11:46:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.579 11:46:23 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.580 11:46:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.put /tmp/spdk.key-sha256.BUT /tmp/spdk.key-sha384.U0t /tmp/spdk.key-sha512.TVl /tmp/spdk.key-sha512.ayW /tmp/spdk.key-sha384.uq8 /tmp/spdk.key-sha256.XIH '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:55.580 00:19:55.580 real 2m10.248s 00:19:55.580 user 4m48.988s 00:19:55.580 sys 0m29.117s 00:19:55.580 11:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.580 11:46:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.580 ************************************ 00:19:55.580 END TEST nvmf_auth_target 00:19:55.580 ************************************ 00:19:55.580 11:46:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:55.580 11:46:23 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:19:55.580 11:46:23 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.580 11:46:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:55.580 11:46:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.580 11:46:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:55.580 ************************************ 00:19:55.580 START TEST nvmf_bdevio_no_huge 00:19:55.580 ************************************ 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:55.580 * Looking for test storage... 00:19:55.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
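The build_nvmf_app_args steps above are what turn the --no-hugepages test flag into the nvmf_tgt command line launched later in this log. A minimal sketch of that assembly (array names follow nvmf/common.sh; the NO_HUGE expansion to "--no-huge -s 1024" is an inference from the launch line at nvmf/common.sh@480 below, not confirmed by this excerpt):

    # Hedged sketch: how the harness composes the target invocation for this no-huge run.
    NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)   # shm id 0 and full tracepoint mask in this run
    NO_HUGE=(--no-huge -s 1024)                   # assumed expansion under --no-hugepages
    NVMF_APP+=("${NO_HUGE[@]}")
    # Once the test netns exists, the target runs inside it with the 0x78 core mask:
    ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x78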
00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.580 11:46:23 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:02.148 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:02.148 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:02.148 Found net devices under 0000:af:00.0: cvl_0_0 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:02.148 Found net devices under 0000:af:00.1: cvl_0_1 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:20:02.148 00:20:02.148 --- 10.0.0.2 ping statistics --- 00:20:02.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.148 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:20:02.148 00:20:02.148 --- 10.0.0.1 ping statistics --- 00:20:02.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.148 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.148 11:46:29 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1992881 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1992881 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:02.148 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1992881 ']' 00:20:02.149 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.149 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.149 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.149 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.149 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.149 [2024-07-15 11:46:30.068769] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:02.149 [2024-07-15 11:46:30.068823] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:02.149 [2024-07-15 11:46:30.154812] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:02.407 [2024-07-15 11:46:30.285467] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:02.407 [2024-07-15 11:46:30.285514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.407 [2024-07-15 11:46:30.285528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.407 [2024-07-15 11:46:30.285554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.407 [2024-07-15 11:46:30.285564] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.407 [2024-07-15 11:46:30.285687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:02.407 [2024-07-15 11:46:30.285797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:02.407 [2024-07-15 11:46:30.285891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:02.407 [2024-07-15 11:46:30.285890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 [2024-07-15 11:46:30.929470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 Malloc0 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.984 11:46:30 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:02.984 [2024-07-15 11:46:30.966564] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:02.984 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:02.984 { 00:20:02.984 "params": { 00:20:02.984 "name": "Nvme$subsystem", 00:20:02.984 "trtype": "$TEST_TRANSPORT", 00:20:02.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:02.984 "adrfam": "ipv4", 00:20:02.984 "trsvcid": "$NVMF_PORT", 00:20:02.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:02.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:02.985 "hdgst": ${hdgst:-false}, 00:20:02.985 "ddgst": ${ddgst:-false} 00:20:02.985 }, 00:20:02.985 "method": "bdev_nvme_attach_controller" 00:20:02.985 } 00:20:02.985 EOF 00:20:02.985 )") 00:20:02.985 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:02.985 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:02.985 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:02.985 11:46:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:02.985 "params": { 00:20:02.985 "name": "Nvme1", 00:20:02.985 "trtype": "tcp", 00:20:02.985 "traddr": "10.0.0.2", 00:20:02.985 "adrfam": "ipv4", 00:20:02.985 "trsvcid": "4420", 00:20:02.985 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:02.985 "hdgst": false, 00:20:02.985 "ddgst": false 00:20:02.985 }, 00:20:02.985 "method": "bdev_nvme_attach_controller" 00:20:02.985 }' 00:20:02.985 [2024-07-15 11:46:31.016202] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:02.985 [2024-07-15 11:46:31.016251] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1993162 ] 00:20:03.243 [2024-07-15 11:46:31.090618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:03.243 [2024-07-15 11:46:31.191844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.243 [2024-07-15 11:46:31.191938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.243 [2024-07-15 11:46:31.191938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.502 I/O targets: 00:20:03.502 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:03.502 00:20:03.502 00:20:03.502 CUnit - A unit testing framework for C - Version 2.1-3 00:20:03.502 http://cunit.sourceforge.net/ 00:20:03.502 00:20:03.502 00:20:03.502 Suite: bdevio tests on: Nvme1n1 00:20:03.502 Test: blockdev write read block ...passed 00:20:03.502 Test: blockdev write zeroes read block ...passed 00:20:03.502 Test: blockdev write zeroes read no split ...passed 00:20:03.760 Test: blockdev write zeroes read split ...passed 00:20:03.760 Test: blockdev write zeroes read split partial ...passed 00:20:03.760 Test: blockdev reset ...[2024-07-15 11:46:31.687161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:03.760 [2024-07-15 11:46:31.687224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b01670 (9): Bad file descriptor 00:20:03.760 [2024-07-15 11:46:31.702371] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:03.760 passed 00:20:03.760 Test: blockdev write read 8 blocks ...passed 00:20:03.760 Test: blockdev write read size > 128k ...passed 00:20:03.760 Test: blockdev write read invalid size ...passed 00:20:03.760 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:03.760 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:03.760 Test: blockdev write read max offset ...passed 00:20:03.760 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:03.760 Test: blockdev writev readv 8 blocks ...passed 00:20:03.760 Test: blockdev writev readv 30 x 1block ...passed 00:20:04.018 Test: blockdev writev readv block ...passed 00:20:04.018 Test: blockdev writev readv size > 128k ...passed 00:20:04.018 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:04.018 Test: blockdev comparev and writev ...[2024-07-15 11:46:31.877483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.877514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.877530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.877540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.877878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.877891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.877905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.877915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.878243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.878255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.878269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.878279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.878599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.878611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.878625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:04.018 [2024-07-15 11:46:31.878635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:04.018 passed 00:20:04.018 Test: blockdev nvme passthru rw ...passed 00:20:04.018 Test: blockdev nvme passthru vendor specific ...[2024-07-15 11:46:31.961285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.018 [2024-07-15 11:46:31.961303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.961500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.018 [2024-07-15 11:46:31.961512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.961704] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.018 [2024-07-15 11:46:31.961716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:04.018 [2024-07-15 11:46:31.961921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:04.018 [2024-07-15 11:46:31.961934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:04.018 passed 00:20:04.018 Test: blockdev nvme admin passthru ...passed 00:20:04.018 Test: blockdev copy ...passed 00:20:04.018 00:20:04.018 Run Summary: Type Total Ran Passed Failed Inactive 00:20:04.018 suites 1 1 n/a 0 0 00:20:04.018 tests 23 23 23 0 0 00:20:04.018 asserts 152 152 152 0 n/a 00:20:04.018 00:20:04.018 Elapsed time = 1.110 seconds 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:04.276 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:04.276 rmmod nvme_tcp 00:20:04.276 rmmod nvme_fabrics 00:20:04.534 rmmod nvme_keyring 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1992881 ']' 00:20:04.535 11:46:32 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1992881 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1992881 ']' 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1992881 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1992881 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1992881' 00:20:04.535 killing process with pid 1992881 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1992881 00:20:04.535 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1992881 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:04.793 11:46:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.326 11:46:34 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.326 00:20:07.326 real 0m11.709s 00:20:07.326 user 0m13.967s 00:20:07.326 sys 0m6.247s 00:20:07.326 11:46:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:07.326 11:46:34 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:07.326 ************************************ 00:20:07.326 END TEST nvmf_bdevio_no_huge 00:20:07.326 ************************************ 00:20:07.326 11:46:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:07.326 11:46:34 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.326 11:46:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:07.326 11:46:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:07.326 11:46:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:07.326 ************************************ 00:20:07.326 START TEST nvmf_tls 00:20:07.326 ************************************ 00:20:07.326 11:46:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:07.326 * Looking for test storage... 
00:20:07.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.326 11:46:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.890 
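[note] The gather_supported_nvmf_pci_devs scan that follows matches PCI IDs against the e810/x722/mlx tables it builds. A quick sketch for checking what that scan will pick up on a box, using the Intel E810 IDs (0x1592, 0x159b) from those tables; the PCI address is the one the log reports below:

    # list E810 functions by vendor:device ID
    lspci -nn -d 8086:159b
    lspci -nn -d 8086:1592
    # net devices registered under one matched PCI function
    ls /sys/bus/pci/devices/0000:af:00.0/net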
11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:13.890 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:13.890 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:13.890 Found net devices under 0000:af:00.0: cvl_0_0 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:13.890 Found net devices under 0000:af:00.1: cvl_0_1 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.890 11:46:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:20:14.150 00:20:14.150 --- 10.0.0.2 ping statistics --- 00:20:14.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.150 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:20:14.150 00:20:14.150 --- 10.0.0.1 ping statistics --- 00:20:14.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.150 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1997101 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1997101 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1997101 ']' 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.150 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.150 [2024-07-15 11:46:42.137383] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:20:14.150 [2024-07-15 11:46:42.137431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.150 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.150 [2024-07-15 11:46:42.212643] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.409 [2024-07-15 11:46:42.280397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.409 [2024-07-15 11:46:42.280436] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
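[note] nvmf_tcp_init above puts the two E810 ports on opposite sides of a network namespace: cvl_0_1 stays in the root namespace as the initiator (10.0.0.1) and cvl_0_0 moves into cvl_0_0_ns_spdk as the target (10.0.0.2). A standalone sketch of the same topology, with hypothetical interface names eth_ini/eth_tgt standing in for the cvl pair:

    # carve out a namespace for the target side and move one port into it
    sudo ip netns add nvmf_tgt_ns
    sudo ip link set eth_tgt netns nvmf_tgt_ns
    # initiator side stays in the root namespace
    sudo ip addr add 10.0.0.1/24 dev eth_ini
    sudo ip link set eth_ini up
    # target side is addressed inside the namespace
    sudo ip netns exec nvmf_tgt_ns ip addr add 10.0.0.2/24 dev eth_tgt
    sudo ip netns exec nvmf_tgt_ns ip link set eth_tgt up
    sudo ip netns exec nvmf_tgt_ns ip link set lo up
    # open the NVMe/TCP port, then verify reachability both ways as the log does
    sudo iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    sudo ip netns exec nvmf_tgt_ns ping -c 1 10.0.0.1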
00:20:14.409 [2024-07-15 11:46:42.280445] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.409 [2024-07-15 11:46:42.280453] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.409 [2024-07-15 11:46:42.280460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.409 [2024-07-15 11:46:42.280482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:14.977 11:46:42 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:15.236 true 00:20:15.236 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.236 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:15.236 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:15.236 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:15.236 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:15.495 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.495 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:15.754 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:15.754 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:15.754 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:15.754 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:15.754 11:46:43 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:16.014 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:16.014 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:16.014 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.014 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:16.273 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:16.273 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:16.273 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:16.273 11:46:44 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.273 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:16.532 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:16.532 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:16.532 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:16.791 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.3bb6sTRJSn 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.wMrBJhAve2 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.3bb6sTRJSn 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.wMrBJhAve2 00:20:17.050 11:46:44 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:20:17.050 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:17.309 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.3bb6sTRJSn 00:20:17.309 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3bb6sTRJSn 00:20:17.309 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:17.568 [2024-07-15 11:46:45.508000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.568 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:17.827 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:17.827 [2024-07-15 11:46:45.828804] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:17.827 [2024-07-15 11:46:45.829034] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.827 11:46:45 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:18.086 malloc0 00:20:18.086 11:46:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:18.086 11:46:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3bb6sTRJSn 00:20:18.345 [2024-07-15 11:46:46.334372] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:18.345 11:46:46 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.3bb6sTRJSn 00:20:18.345 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.553 Initializing NVMe Controllers 00:20:30.553 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.553 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.553 Initialization complete. Launching workers. 
00:20:30.553 ======================================================== 00:20:30.553 Latency(us) 00:20:30.553 Device Information : IOPS MiB/s Average min max 00:20:30.553 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16408.03 64.09 3900.96 805.65 7187.61 00:20:30.553 ======================================================== 00:20:30.553 Total : 16408.03 64.09 3900.96 805.65 7187.61 00:20:30.553 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3bb6sTRJSn 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3bb6sTRJSn' 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1999540 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1999540 /var/tmp/bdevperf.sock 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1999540 ']' 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:30.553 11:46:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.553 [2024-07-15 11:46:56.495720] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:30.553 [2024-07-15 11:46:56.495792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999540 ] 00:20:30.553 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.553 [2024-07-15 11:46:56.561927] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.553 [2024-07-15 11:46:56.637182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.553 11:46:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:30.553 11:46:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:30.553 11:46:57 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3bb6sTRJSn 00:20:30.553 [2024-07-15 11:46:57.438787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:30.553 [2024-07-15 11:46:57.438879] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:30.553 TLSTESTn1 00:20:30.553 11:46:57 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:30.553 Running I/O for 10 seconds... 00:20:40.541 00:20:40.541 Latency(us) 00:20:40.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.541 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:40.541 Verification LBA range: start 0x0 length 0x2000 00:20:40.541 TLSTESTn1 : 10.03 4670.62 18.24 0.00 0.00 27352.79 6920.60 62914.56 00:20:40.541 =================================================================================================================== 00:20:40.541 Total : 4670.62 18.24 0.00 0.00 27352.79 6920.60 62914.56 00:20:40.541 0 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1999540 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1999540 ']' 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1999540 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1999540 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1999540' 00:20:40.541 killing process with pid 1999540 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1999540 00:20:40.541 Received shutdown signal, test time was about 10.000000 seconds 00:20:40.541 00:20:40.541 Latency(us) 00:20:40.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:40.541 =================================================================================================================== 00:20:40.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:40.541 [2024-07-15 11:47:07.736613] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1999540 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wMrBJhAve2 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wMrBJhAve2 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wMrBJhAve2 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.wMrBJhAve2' 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001500 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001500 /var/tmp/bdevperf.sock 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001500 ']' 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:40.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.541 11:47:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.541 [2024-07-15 11:47:07.967504] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
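[note] Before the first failure case above plays out, it is worth collecting the rpc.py calls traced earlier (sock options before framework init, then transport, subsystem, TLS listener, namespace, and a per-host PSK): together they are the target-side TLS bring-up that the passing TLSTESTn1 run exercised. A condensed sketch, with the NQNs, address, port, and temp key path copied from the log and the long Jenkins paths shortened to a local checkout:

    rpc=./scripts/rpc.py
    # pin the ssl sock impl to TLS 1.3 before the subsystem framework starts
    $rpc sock_impl_set_options -i ssl --tls-version 13
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # bind host1 to its PSK file; the handshake only succeeds with this key
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3bb6sTRJSn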
00:20:40.541 [2024-07-15 11:47:07.967562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001500 ] 00:20:40.541 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.541 [2024-07-15 11:47:08.034749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.541 [2024-07-15 11:47:08.110168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.801 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:40.801 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:40.801 11:47:08 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wMrBJhAve2 00:20:41.060 [2024-07-15 11:47:08.925043] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:41.060 [2024-07-15 11:47:08.925114] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:41.060 [2024-07-15 11:47:08.933071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:41.061 [2024-07-15 11:47:08.933412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e85e0 (107): Transport endpoint is not connected 00:20:41.061 [2024-07-15 11:47:08.934405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e85e0 (9): Bad file descriptor 00:20:41.061 [2024-07-15 11:47:08.935406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:41.061 [2024-07-15 11:47:08.935422] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:41.061 [2024-07-15 11:47:08.935433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:41.061 request: 00:20:41.061 { 00:20:41.061 "name": "TLSTEST", 00:20:41.061 "trtype": "tcp", 00:20:41.061 "traddr": "10.0.0.2", 00:20:41.061 "adrfam": "ipv4", 00:20:41.061 "trsvcid": "4420", 00:20:41.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.061 "prchk_reftag": false, 00:20:41.061 "prchk_guard": false, 00:20:41.061 "hdgst": false, 00:20:41.061 "ddgst": false, 00:20:41.061 "psk": "/tmp/tmp.wMrBJhAve2", 00:20:41.061 "method": "bdev_nvme_attach_controller", 00:20:41.061 "req_id": 1 00:20:41.061 } 00:20:41.061 Got JSON-RPC error response 00:20:41.061 response: 00:20:41.061 { 00:20:41.061 "code": -5, 00:20:41.061 "message": "Input/output error" 00:20:41.061 } 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2001500 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001500 ']' 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001500 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:41.061 11:47:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001500 00:20:41.061 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:41.061 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:41.061 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001500' 00:20:41.061 killing process with pid 2001500 00:20:41.061 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001500 00:20:41.061 Received shutdown signal, test time was about 10.000000 seconds 00:20:41.061 00:20:41.061 Latency(us) 00:20:41.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:41.061 =================================================================================================================== 00:20:41.061 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:41.061 [2024-07-15 11:47:09.013131] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:41.061 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001500 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3bb6sTRJSn 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3bb6sTRJSn 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.3bb6sTRJSn 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3bb6sTRJSn' 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001657 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001657 /var/tmp/bdevperf.sock 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001657 ']' 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:41.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:41.320 11:47:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.320 [2024-07-15 11:47:09.233190] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:41.320 [2024-07-15 11:47:09.233246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001657 ] 00:20:41.320 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.320 [2024-07-15 11:47:09.300224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.320 [2024-07-15 11:47:09.375003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.967 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.967 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:41.967 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.3bb6sTRJSn 00:20:42.226 [2024-07-15 11:47:10.185299] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:42.226 [2024-07-15 11:47:10.185377] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:42.226 [2024-07-15 11:47:10.196240] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:42.226 [2024-07-15 11:47:10.196266] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:42.226 [2024-07-15 11:47:10.196293] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:42.226 [2024-07-15 11:47:10.196605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174a5e0 (107): Transport endpoint is not connected 00:20:42.226 [2024-07-15 11:47:10.197598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x174a5e0 (9): Bad file descriptor 00:20:42.226 [2024-07-15 11:47:10.198599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:42.226 [2024-07-15 11:47:10.198611] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:42.226 [2024-07-15 11:47:10.198624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:42.226 request: 00:20:42.226 { 00:20:42.226 "name": "TLSTEST", 00:20:42.226 "trtype": "tcp", 00:20:42.226 "traddr": "10.0.0.2", 00:20:42.226 "adrfam": "ipv4", 00:20:42.226 "trsvcid": "4420", 00:20:42.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:42.226 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:42.226 "prchk_reftag": false, 00:20:42.226 "prchk_guard": false, 00:20:42.226 "hdgst": false, 00:20:42.226 "ddgst": false, 00:20:42.226 "psk": "/tmp/tmp.3bb6sTRJSn", 00:20:42.226 "method": "bdev_nvme_attach_controller", 00:20:42.226 "req_id": 1 00:20:42.226 } 00:20:42.226 Got JSON-RPC error response 00:20:42.226 response: 00:20:42.226 { 00:20:42.226 "code": -5, 00:20:42.226 "message": "Input/output error" 00:20:42.226 } 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2001657 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001657 ']' 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001657 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001657 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001657' 00:20:42.226 killing process with pid 2001657 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001657 00:20:42.226 Received shutdown signal, test time was about 10.000000 seconds 00:20:42.226 00:20:42.226 Latency(us) 00:20:42.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:42.226 =================================================================================================================== 00:20:42.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:42.226 [2024-07-15 11:47:10.283640] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:42.226 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001657 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3bb6sTRJSn 00:20:42.486 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3bb6sTRJSn 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.3bb6sTRJSn 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3bb6sTRJSn' 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2001933 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2001933 /var/tmp/bdevperf.sock 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001933 ']' 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.487 11:47:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.487 [2024-07-15 11:47:10.506277] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:42.487 [2024-07-15 11:47:10.506329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001933 ] 00:20:42.487 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.487 [2024-07-15 11:47:10.572238] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.746 [2024-07-15 11:47:10.637653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.313 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.313 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:43.313 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3bb6sTRJSn 00:20:43.573 [2024-07-15 11:47:11.464228] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.573 [2024-07-15 11:47:11.464300] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.573 [2024-07-15 11:47:11.469390] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:43.573 [2024-07-15 11:47:11.469415] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:43.573 [2024-07-15 11:47:11.469442] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:43.573 [2024-07-15 11:47:11.469601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fc5e0 (107): Transport endpoint is not connected 00:20:43.573 [2024-07-15 11:47:11.470593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22fc5e0 (9): Bad file descriptor 00:20:43.573 [2024-07-15 11:47:11.471594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:43.573 [2024-07-15 11:47:11.471607] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:43.573 [2024-07-15 11:47:11.471618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:20:43.573 request: 00:20:43.573 { 00:20:43.573 "name": "TLSTEST", 00:20:43.573 "trtype": "tcp", 00:20:43.573 "traddr": "10.0.0.2", 00:20:43.573 "adrfam": "ipv4", 00:20:43.573 "trsvcid": "4420", 00:20:43.573 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:43.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:43.573 "prchk_reftag": false, 00:20:43.573 "prchk_guard": false, 00:20:43.573 "hdgst": false, 00:20:43.573 "ddgst": false, 00:20:43.573 "psk": "/tmp/tmp.3bb6sTRJSn", 00:20:43.573 "method": "bdev_nvme_attach_controller", 00:20:43.573 "req_id": 1 00:20:43.573 } 00:20:43.573 Got JSON-RPC error response 00:20:43.573 response: 00:20:43.573 { 00:20:43.573 "code": -5, 00:20:43.573 "message": "Input/output error" 00:20:43.573 } 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2001933 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001933 ']' 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001933 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001933 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001933' 00:20:43.573 killing process with pid 2001933 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001933 00:20:43.573 Received shutdown signal, test time was about 10.000000 seconds 00:20:43.573 00:20:43.573 Latency(us) 00:20:43.573 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.573 =================================================================================================================== 00:20:43.573 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:43.573 [2024-07-15 11:47:11.531164] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.573 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001933 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2002206 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2002206 /var/tmp/bdevperf.sock 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2002206 ']' 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:43.833 11:47:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:43.833 [2024-07-15 11:47:11.751386] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:43.833 [2024-07-15 11:47:11.751437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002206 ] 00:20:43.833 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.833 [2024-07-15 11:47:11.816882] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.833 [2024-07-15 11:47:11.881772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:44.771 [2024-07-15 11:47:12.722561] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:44.771 [2024-07-15 11:47:12.724049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x167fb50 (9): Bad file descriptor 00:20:44.771 [2024-07-15 11:47:12.725047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:44.771 [2024-07-15 11:47:12.725061] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:44.771 [2024-07-15 11:47:12.725073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
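This is the expected negative result for the empty-PSK case: the controller attach is attempted against the TLS listener without any key, the TCP connection is dropped during controller initialization, and the RPC fails; the failing request and its -5 (Input/output error) response are dumped next. For reference, a minimal sketch of the call under test, with the socket, address and NQNs used throughout this run (note the absence of a --psk argument):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  # Expected to fail here: the listener was created with -k (TLS), so an
  # attach without a usable PSK cannot complete the handshake.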
00:20:44.771 request: 00:20:44.771 { 00:20:44.771 "name": "TLSTEST", 00:20:44.771 "trtype": "tcp", 00:20:44.771 "traddr": "10.0.0.2", 00:20:44.771 "adrfam": "ipv4", 00:20:44.771 "trsvcid": "4420", 00:20:44.771 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.771 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.771 "prchk_reftag": false, 00:20:44.771 "prchk_guard": false, 00:20:44.771 "hdgst": false, 00:20:44.771 "ddgst": false, 00:20:44.771 "method": "bdev_nvme_attach_controller", 00:20:44.771 "req_id": 1 00:20:44.771 } 00:20:44.771 Got JSON-RPC error response 00:20:44.771 response: 00:20:44.771 { 00:20:44.771 "code": -5, 00:20:44.771 "message": "Input/output error" 00:20:44.771 } 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2002206 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2002206 ']' 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2002206 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002206 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002206' 00:20:44.771 killing process with pid 2002206 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2002206 00:20:44.771 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.771 00:20:44.771 Latency(us) 00:20:44.771 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.771 =================================================================================================================== 00:20:44.771 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.771 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2002206 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1997101 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1997101 ']' 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1997101 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:45.030 11:47:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1997101 00:20:45.030 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:45.030 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:45.030 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1997101' 00:20:45.030 
killing process with pid 1997101 00:20:45.030 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1997101 00:20:45.030 [2024-07-15 11:47:13.027917] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:45.030 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1997101 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.IV7W06ltnk 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.IV7W06ltnk 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2002484 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2002484 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2002484 ']' 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:45.290 11:47:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.290 [2024-07-15 11:47:13.332513] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
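The key_long generated here is an NVMe TLS PSK in interchange format: the fixed prefix NVMeTLSkey-1, a two-hex-digit hash identifier (02, i.e. SHA-384 for this 48-byte key), and a base64 blob, colon-separated and colon-terminated. Judging from the printed value, the base64 payload is the configured key bytes with a 4-byte checksum appended. A hedged reconstruction of what format_interchange_psk/format_key compute via the "python -" heredoc traced above (the little-endian-CRC32 detail is inferred, not confirmed by this log; if it differs, only the last eight base64 characters would change):

  format_key() {   # sketch of the helper traced above
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: 4-byte little-endian CRC32 of the key bytes
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
PYEOF
  }
  format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
  # should print the key_long seen above:
  # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The resulting key is then written byte-for-byte (echo -n) into a mktemp file and chmod'ed to 0600 because, as later tests in this log demonstrate, both the initiator and the target refuse PSK files readable by group or other.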
00:20:45.290 [2024-07-15 11:47:13.332562] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.290 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.549 [2024-07-15 11:47:13.403843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.549 [2024-07-15 11:47:13.473817] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.549 [2024-07-15 11:47:13.473861] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.549 [2024-07-15 11:47:13.473871] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.549 [2024-07-15 11:47:13.473879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.549 [2024-07-15 11:47:13.473886] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.549 [2024-07-15 11:47:13.473913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:46.116 11:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IV7W06ltnk 00:20:46.117 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:46.376 [2024-07-15 11:47:14.323019] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.376 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:46.636 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:46.636 [2024-07-15 11:47:14.667882] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:46.636 [2024-07-15 11:47:14.668088] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.636 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:46.895 malloc0 00:20:46.895 11:47:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IV7W06ltnk 00:20:47.155 [2024-07-15 11:47:15.173611] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IV7W06ltnk 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IV7W06ltnk' 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2002780 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2002780 /var/tmp/bdevperf.sock 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2002780 ']' 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.155 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.156 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.156 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.156 11:47:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.156 [2024-07-15 11:47:15.219772] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
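Next comes the happy-path client run: bdevperf is started idle (-z) on its own RPC socket, the controller is attached with the 0600 key file, and a 10-second verify workload is driven against the resulting TLSTESTn1 bdev; the IOPS table from that run follows below. Condensed, the flow these traces execute is (SPDK= is shorthand for the workspace path, nothing more):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 &
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.IV7W06ltnk              # key file must be owner-only (0600)
  $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 \
      -s /var/tmp/bdevperf.sock perform_tests   # drives I/O on TLSTESTn1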
00:20:47.156 [2024-07-15 11:47:15.219821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002780 ] 00:20:47.156 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.415 [2024-07-15 11:47:15.285654] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.415 [2024-07-15 11:47:15.359303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.982 11:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:47.982 11:47:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:47.982 11:47:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IV7W06ltnk 00:20:48.241 [2024-07-15 11:47:16.197124] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.241 [2024-07-15 11:47:16.197202] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.241 TLSTESTn1 00:20:48.241 11:47:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:48.500 Running I/O for 10 seconds... 00:20:58.478 00:20:58.478 Latency(us) 00:20:58.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:58.478 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:58.478 Verification LBA range: start 0x0 length 0x2000 00:20:58.478 TLSTESTn1 : 10.03 4693.72 18.33 0.00 0.00 27218.19 7130.32 57461.96 00:20:58.479 =================================================================================================================== 00:20:58.479 Total : 4693.72 18.33 0.00 0.00 27218.19 7130.32 57461.96 00:20:58.479 0 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2002780 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2002780 ']' 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2002780 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002780 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002780' 00:20:58.479 killing process with pid 2002780 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2002780 00:20:58.479 Received shutdown signal, test time was about 10.000000 seconds 00:20:58.479 00:20:58.479 Latency(us) 00:20:58.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:20:58.479 =================================================================================================================== 00:20:58.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:58.479 [2024-07-15 11:47:26.508115] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:58.479 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2002780 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.IV7W06ltnk 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IV7W06ltnk 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IV7W06ltnk 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.IV7W06ltnk 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.IV7W06ltnk' 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2004641 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2004641 /var/tmp/bdevperf.sock 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2004641 ']' 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:58.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:58.738 11:47:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:58.738 [2024-07-15 11:47:26.747549] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:20:58.738 [2024-07-15 11:47:26.747606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004641 ] 00:20:58.738 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.738 [2024-07-15 11:47:26.813377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.997 [2024-07-15 11:47:26.888930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:59.565 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:59.565 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:59.565 11:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IV7W06ltnk 00:20:59.825 [2024-07-15 11:47:27.703762] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.825 [2024-07-15 11:47:27.703818] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:59.825 [2024-07-15 11:47:27.703827] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.IV7W06ltnk 00:20:59.825 request: 00:20:59.825 { 00:20:59.825 "name": "TLSTEST", 00:20:59.825 "trtype": "tcp", 00:20:59.825 "traddr": "10.0.0.2", 00:20:59.825 "adrfam": "ipv4", 00:20:59.825 "trsvcid": "4420", 00:20:59.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:59.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:59.825 "prchk_reftag": false, 00:20:59.825 "prchk_guard": false, 00:20:59.825 "hdgst": false, 00:20:59.825 "ddgst": false, 00:20:59.825 "psk": "/tmp/tmp.IV7W06ltnk", 00:20:59.825 "method": "bdev_nvme_attach_controller", 00:20:59.825 "req_id": 1 00:20:59.825 } 00:20:59.825 Got JSON-RPC error response 00:20:59.825 response: 00:20:59.825 { 00:20:59.825 "code": -1, 00:20:59.825 "message": "Operation not permitted" 00:20:59.825 } 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2004641 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2004641 ']' 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2004641 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004641 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004641' 00:20:59.825 killing process with pid 2004641 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2004641 00:20:59.825 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.825 00:20:59.825 Latency(us) 00:20:59.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.825 
=================================================================================================================== 00:20:59.825 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:59.825 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2004641 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2002484 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2002484 ']' 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2002484 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:00.085 11:47:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2002484 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2002484' 00:21:00.085 killing process with pid 2002484 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2002484 00:21:00.085 [2024-07-15 11:47:28.003640] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2002484 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:00.085 11:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2004905 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2004905 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2004905 ']' 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
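A fresh nvmf target is now up (pid 2004905) and setup_nvmf_tgt is deliberately re-run while the key file is still world-readable from the chmod 0666 above; the nvmf_subsystem_add_host step below is therefore expected to fail with -32603 ("Could not retrieve PSK from file") until the key is chmod'ed back to 0600. For reference, the full target-side TLS setup these traces walk through (RPC= is shorthand for the rpc.py path):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k          # -k makes this a TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.IV7W06ltnk              # rejected (-32603) while the file is 0666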
00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.345 11:47:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.345 [2024-07-15 11:47:28.248931] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:00.345 [2024-07-15 11:47:28.248985] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.345 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.345 [2024-07-15 11:47:28.324280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.345 [2024-07-15 11:47:28.395805] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.345 [2024-07-15 11:47:28.395850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.345 [2024-07-15 11:47:28.395860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.345 [2024-07-15 11:47:28.395868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.345 [2024-07-15 11:47:28.395875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.345 [2024-07-15 11:47:28.395900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.281 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:01.282 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:01.282 11:47:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:21:01.282 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IV7W06ltnk 00:21:01.282 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:01.282 [2024-07-15 11:47:29.242341] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.282 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:01.541 
11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:01.541 [2024-07-15 11:47:29.575187] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:01.541 [2024-07-15 11:47:29.575401] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.541 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:01.800 malloc0 00:21:01.800 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:02.060 11:47:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IV7W06ltnk 00:21:02.060 [2024-07-15 11:47:30.096837] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:02.060 [2024-07-15 11:47:30.096872] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:02.060 [2024-07-15 11:47:30.096898] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:02.060 request: 00:21:02.060 { 00:21:02.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:02.060 "host": "nqn.2016-06.io.spdk:host1", 00:21:02.060 "psk": "/tmp/tmp.IV7W06ltnk", 00:21:02.060 "method": "nvmf_subsystem_add_host", 00:21:02.060 "req_id": 1 00:21:02.060 } 00:21:02.060 Got JSON-RPC error response 00:21:02.060 response: 00:21:02.060 { 00:21:02.060 "code": -32603, 00:21:02.060 "message": "Internal error" 00:21:02.060 } 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2004905 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2004905 ']' 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2004905 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.060 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2004905 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2004905' 00:21:02.352 killing process with pid 2004905 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2004905 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2004905 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.IV7W06ltnk 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:02.352 
11:47:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2005431 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2005431 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2005431 ']' 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:02.352 11:47:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.352 [2024-07-15 11:47:30.431821] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:02.352 [2024-07-15 11:47:30.431876] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.611 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.611 [2024-07-15 11:47:30.504796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.611 [2024-07-15 11:47:30.580247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.611 [2024-07-15 11:47:30.580287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.611 [2024-07-15 11:47:30.580297] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.611 [2024-07-15 11:47:30.580305] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.611 [2024-07-15 11:47:30.580313] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
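With the key back at 0600, a third target (pid 2005431) is configured successfully, a bdevperf client (pid 2005748) attaches with the same PSK, and the script then snapshots both applications' live configuration; those snapshots are the large tgtconf and bdevperfconf JSON blobs dumped below (note the target dump carries the PSK path inside its nvmf_subsystem_add_host entry). In sketch form the two dumps are just save_config calls captured into shell variables:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  tgtconf=$($RPC save_config)                                  # target, default /var/tmp/spdk.sock
  bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)   # bdevperf side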
00:21:02.611 [2024-07-15 11:47:30.580334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IV7W06ltnk 00:21:03.179 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:03.438 [2024-07-15 11:47:31.411229] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.438 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:03.697 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:03.697 [2024-07-15 11:47:31.768124] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:03.697 [2024-07-15 11:47:31.768329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.697 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:03.956 malloc0 00:21:03.956 11:47:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:04.214 11:47:32 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IV7W06ltnk 00:21:04.215 [2024-07-15 11:47:32.289891] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2005748 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2005748 /var/tmp/bdevperf.sock 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2005748 ']' 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.215 11:47:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.474 [2024-07-15 11:47:32.346436] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:04.474 [2024-07-15 11:47:32.346486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005748 ] 00:21:04.474 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.474 [2024-07-15 11:47:32.411372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.474 [2024-07-15 11:47:32.484782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.042 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:05.042 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:05.042 11:47:33 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.IV7W06ltnk 00:21:05.302 [2024-07-15 11:47:33.307553] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.302 [2024-07-15 11:47:33.307640] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:05.302 TLSTESTn1 00:21:05.302 11:47:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:05.561 11:47:33 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:05.561 "subsystems": [ 00:21:05.561 { 00:21:05.561 "subsystem": "keyring", 00:21:05.561 "config": [] 00:21:05.561 }, 00:21:05.561 { 00:21:05.561 "subsystem": "iobuf", 00:21:05.561 "config": [ 00:21:05.561 { 00:21:05.561 "method": "iobuf_set_options", 00:21:05.561 "params": { 00:21:05.561 "small_pool_count": 8192, 00:21:05.561 "large_pool_count": 1024, 00:21:05.561 "small_bufsize": 8192, 00:21:05.561 "large_bufsize": 135168 00:21:05.561 } 00:21:05.561 } 00:21:05.561 ] 00:21:05.561 }, 00:21:05.561 { 00:21:05.561 "subsystem": "sock", 00:21:05.561 "config": [ 00:21:05.561 { 00:21:05.561 "method": "sock_set_default_impl", 00:21:05.561 "params": { 00:21:05.561 "impl_name": "posix" 00:21:05.561 } 00:21:05.561 }, 00:21:05.561 { 00:21:05.561 "method": "sock_impl_set_options", 00:21:05.561 "params": { 00:21:05.561 "impl_name": "ssl", 00:21:05.561 "recv_buf_size": 4096, 00:21:05.561 "send_buf_size": 4096, 00:21:05.561 "enable_recv_pipe": true, 00:21:05.561 "enable_quickack": false, 00:21:05.561 "enable_placement_id": 0, 00:21:05.561 "enable_zerocopy_send_server": true, 00:21:05.561 "enable_zerocopy_send_client": false, 00:21:05.561 "zerocopy_threshold": 0, 00:21:05.561 "tls_version": 0, 00:21:05.561 "enable_ktls": false 00:21:05.561 } 00:21:05.561 }, 00:21:05.561 { 00:21:05.562 "method": "sock_impl_set_options", 00:21:05.562 "params": { 00:21:05.562 "impl_name": "posix", 00:21:05.562 "recv_buf_size": 2097152, 00:21:05.562 
"send_buf_size": 2097152, 00:21:05.562 "enable_recv_pipe": true, 00:21:05.562 "enable_quickack": false, 00:21:05.562 "enable_placement_id": 0, 00:21:05.562 "enable_zerocopy_send_server": true, 00:21:05.562 "enable_zerocopy_send_client": false, 00:21:05.562 "zerocopy_threshold": 0, 00:21:05.562 "tls_version": 0, 00:21:05.562 "enable_ktls": false 00:21:05.562 } 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "vmd", 00:21:05.562 "config": [] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "accel", 00:21:05.562 "config": [ 00:21:05.562 { 00:21:05.562 "method": "accel_set_options", 00:21:05.562 "params": { 00:21:05.562 "small_cache_size": 128, 00:21:05.562 "large_cache_size": 16, 00:21:05.562 "task_count": 2048, 00:21:05.562 "sequence_count": 2048, 00:21:05.562 "buf_count": 2048 00:21:05.562 } 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "bdev", 00:21:05.562 "config": [ 00:21:05.562 { 00:21:05.562 "method": "bdev_set_options", 00:21:05.562 "params": { 00:21:05.562 "bdev_io_pool_size": 65535, 00:21:05.562 "bdev_io_cache_size": 256, 00:21:05.562 "bdev_auto_examine": true, 00:21:05.562 "iobuf_small_cache_size": 128, 00:21:05.562 "iobuf_large_cache_size": 16 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_raid_set_options", 00:21:05.562 "params": { 00:21:05.562 "process_window_size_kb": 1024 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_iscsi_set_options", 00:21:05.562 "params": { 00:21:05.562 "timeout_sec": 30 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_nvme_set_options", 00:21:05.562 "params": { 00:21:05.562 "action_on_timeout": "none", 00:21:05.562 "timeout_us": 0, 00:21:05.562 "timeout_admin_us": 0, 00:21:05.562 "keep_alive_timeout_ms": 10000, 00:21:05.562 "arbitration_burst": 0, 00:21:05.562 "low_priority_weight": 0, 00:21:05.562 "medium_priority_weight": 0, 00:21:05.562 "high_priority_weight": 0, 00:21:05.562 "nvme_adminq_poll_period_us": 10000, 00:21:05.562 "nvme_ioq_poll_period_us": 0, 00:21:05.562 "io_queue_requests": 0, 00:21:05.562 "delay_cmd_submit": true, 00:21:05.562 "transport_retry_count": 4, 00:21:05.562 "bdev_retry_count": 3, 00:21:05.562 "transport_ack_timeout": 0, 00:21:05.562 "ctrlr_loss_timeout_sec": 0, 00:21:05.562 "reconnect_delay_sec": 0, 00:21:05.562 "fast_io_fail_timeout_sec": 0, 00:21:05.562 "disable_auto_failback": false, 00:21:05.562 "generate_uuids": false, 00:21:05.562 "transport_tos": 0, 00:21:05.562 "nvme_error_stat": false, 00:21:05.562 "rdma_srq_size": 0, 00:21:05.562 "io_path_stat": false, 00:21:05.562 "allow_accel_sequence": false, 00:21:05.562 "rdma_max_cq_size": 0, 00:21:05.562 "rdma_cm_event_timeout_ms": 0, 00:21:05.562 "dhchap_digests": [ 00:21:05.562 "sha256", 00:21:05.562 "sha384", 00:21:05.562 "sha512" 00:21:05.562 ], 00:21:05.562 "dhchap_dhgroups": [ 00:21:05.562 "null", 00:21:05.562 "ffdhe2048", 00:21:05.562 "ffdhe3072", 00:21:05.562 "ffdhe4096", 00:21:05.562 "ffdhe6144", 00:21:05.562 "ffdhe8192" 00:21:05.562 ] 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_nvme_set_hotplug", 00:21:05.562 "params": { 00:21:05.562 "period_us": 100000, 00:21:05.562 "enable": false 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_malloc_create", 00:21:05.562 "params": { 00:21:05.562 "name": "malloc0", 00:21:05.562 "num_blocks": 8192, 00:21:05.562 "block_size": 4096, 00:21:05.562 "physical_block_size": 4096, 00:21:05.562 "uuid": 
"dac0ebb8-a50d-4e38-84aa-e64b3b697b7a", 00:21:05.562 "optimal_io_boundary": 0 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "bdev_wait_for_examine" 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "nbd", 00:21:05.562 "config": [] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "scheduler", 00:21:05.562 "config": [ 00:21:05.562 { 00:21:05.562 "method": "framework_set_scheduler", 00:21:05.562 "params": { 00:21:05.562 "name": "static" 00:21:05.562 } 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "subsystem": "nvmf", 00:21:05.562 "config": [ 00:21:05.562 { 00:21:05.562 "method": "nvmf_set_config", 00:21:05.562 "params": { 00:21:05.562 "discovery_filter": "match_any", 00:21:05.562 "admin_cmd_passthru": { 00:21:05.562 "identify_ctrlr": false 00:21:05.562 } 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_set_max_subsystems", 00:21:05.562 "params": { 00:21:05.562 "max_subsystems": 1024 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_set_crdt", 00:21:05.562 "params": { 00:21:05.562 "crdt1": 0, 00:21:05.562 "crdt2": 0, 00:21:05.562 "crdt3": 0 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_create_transport", 00:21:05.562 "params": { 00:21:05.562 "trtype": "TCP", 00:21:05.562 "max_queue_depth": 128, 00:21:05.562 "max_io_qpairs_per_ctrlr": 127, 00:21:05.562 "in_capsule_data_size": 4096, 00:21:05.562 "max_io_size": 131072, 00:21:05.562 "io_unit_size": 131072, 00:21:05.562 "max_aq_depth": 128, 00:21:05.562 "num_shared_buffers": 511, 00:21:05.562 "buf_cache_size": 4294967295, 00:21:05.562 "dif_insert_or_strip": false, 00:21:05.562 "zcopy": false, 00:21:05.562 "c2h_success": false, 00:21:05.562 "sock_priority": 0, 00:21:05.562 "abort_timeout_sec": 1, 00:21:05.562 "ack_timeout": 0, 00:21:05.562 "data_wr_pool_size": 0 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_create_subsystem", 00:21:05.562 "params": { 00:21:05.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.562 "allow_any_host": false, 00:21:05.562 "serial_number": "SPDK00000000000001", 00:21:05.562 "model_number": "SPDK bdev Controller", 00:21:05.562 "max_namespaces": 10, 00:21:05.562 "min_cntlid": 1, 00:21:05.562 "max_cntlid": 65519, 00:21:05.562 "ana_reporting": false 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_subsystem_add_host", 00:21:05.562 "params": { 00:21:05.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.562 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.562 "psk": "/tmp/tmp.IV7W06ltnk" 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_subsystem_add_ns", 00:21:05.562 "params": { 00:21:05.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.562 "namespace": { 00:21:05.562 "nsid": 1, 00:21:05.562 "bdev_name": "malloc0", 00:21:05.562 "nguid": "DAC0EBB8A50D4E3884AAE64B3B697B7A", 00:21:05.562 "uuid": "dac0ebb8-a50d-4e38-84aa-e64b3b697b7a", 00:21:05.562 "no_auto_visible": false 00:21:05.562 } 00:21:05.562 } 00:21:05.562 }, 00:21:05.562 { 00:21:05.562 "method": "nvmf_subsystem_add_listener", 00:21:05.562 "params": { 00:21:05.562 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.562 "listen_address": { 00:21:05.562 "trtype": "TCP", 00:21:05.562 "adrfam": "IPv4", 00:21:05.562 "traddr": "10.0.0.2", 00:21:05.562 "trsvcid": "4420" 00:21:05.562 }, 00:21:05.562 "secure_channel": true 00:21:05.562 } 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 } 00:21:05.562 ] 00:21:05.562 }' 00:21:05.562 11:47:33 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:05.839 11:47:33 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:05.839 "subsystems": [ 00:21:05.839 { 00:21:05.839 "subsystem": "keyring", 00:21:05.840 "config": [] 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "subsystem": "iobuf", 00:21:05.840 "config": [ 00:21:05.840 { 00:21:05.840 "method": "iobuf_set_options", 00:21:05.840 "params": { 00:21:05.840 "small_pool_count": 8192, 00:21:05.840 "large_pool_count": 1024, 00:21:05.840 "small_bufsize": 8192, 00:21:05.840 "large_bufsize": 135168 00:21:05.840 } 00:21:05.840 } 00:21:05.840 ] 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "subsystem": "sock", 00:21:05.840 "config": [ 00:21:05.840 { 00:21:05.840 "method": "sock_set_default_impl", 00:21:05.840 "params": { 00:21:05.840 "impl_name": "posix" 00:21:05.840 } 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "method": "sock_impl_set_options", 00:21:05.840 "params": { 00:21:05.840 "impl_name": "ssl", 00:21:05.840 "recv_buf_size": 4096, 00:21:05.840 "send_buf_size": 4096, 00:21:05.840 "enable_recv_pipe": true, 00:21:05.840 "enable_quickack": false, 00:21:05.840 "enable_placement_id": 0, 00:21:05.840 "enable_zerocopy_send_server": true, 00:21:05.840 "enable_zerocopy_send_client": false, 00:21:05.840 "zerocopy_threshold": 0, 00:21:05.840 "tls_version": 0, 00:21:05.840 "enable_ktls": false 00:21:05.840 } 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "method": "sock_impl_set_options", 00:21:05.840 "params": { 00:21:05.840 "impl_name": "posix", 00:21:05.840 "recv_buf_size": 2097152, 00:21:05.840 "send_buf_size": 2097152, 00:21:05.840 "enable_recv_pipe": true, 00:21:05.840 "enable_quickack": false, 00:21:05.840 "enable_placement_id": 0, 00:21:05.840 "enable_zerocopy_send_server": true, 00:21:05.840 "enable_zerocopy_send_client": false, 00:21:05.840 "zerocopy_threshold": 0, 00:21:05.840 "tls_version": 0, 00:21:05.840 "enable_ktls": false 00:21:05.840 } 00:21:05.840 } 00:21:05.840 ] 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "subsystem": "vmd", 00:21:05.840 "config": [] 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "subsystem": "accel", 00:21:05.840 "config": [ 00:21:05.840 { 00:21:05.840 "method": "accel_set_options", 00:21:05.840 "params": { 00:21:05.840 "small_cache_size": 128, 00:21:05.840 "large_cache_size": 16, 00:21:05.840 "task_count": 2048, 00:21:05.840 "sequence_count": 2048, 00:21:05.840 "buf_count": 2048 00:21:05.840 } 00:21:05.840 } 00:21:05.840 ] 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "subsystem": "bdev", 00:21:05.840 "config": [ 00:21:05.840 { 00:21:05.840 "method": "bdev_set_options", 00:21:05.840 "params": { 00:21:05.840 "bdev_io_pool_size": 65535, 00:21:05.840 "bdev_io_cache_size": 256, 00:21:05.840 "bdev_auto_examine": true, 00:21:05.840 "iobuf_small_cache_size": 128, 00:21:05.840 "iobuf_large_cache_size": 16 00:21:05.840 } 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "method": "bdev_raid_set_options", 00:21:05.840 "params": { 00:21:05.840 "process_window_size_kb": 1024 00:21:05.840 } 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "method": "bdev_iscsi_set_options", 00:21:05.840 "params": { 00:21:05.840 "timeout_sec": 30 00:21:05.840 } 00:21:05.840 }, 00:21:05.840 { 00:21:05.840 "method": "bdev_nvme_set_options", 00:21:05.840 "params": { 00:21:05.840 "action_on_timeout": "none", 00:21:05.840 "timeout_us": 0, 00:21:05.840 "timeout_admin_us": 0, 00:21:05.840 "keep_alive_timeout_ms": 10000, 00:21:05.840 "arbitration_burst": 0, 
00:21:05.840 "low_priority_weight": 0, 00:21:05.840 "medium_priority_weight": 0, 00:21:05.840 "high_priority_weight": 0, 00:21:05.840 "nvme_adminq_poll_period_us": 10000, 00:21:05.840 "nvme_ioq_poll_period_us": 0, 00:21:05.840 "io_queue_requests": 512, 00:21:05.840 "delay_cmd_submit": true, 00:21:05.840 "transport_retry_count": 4, 00:21:05.840 "bdev_retry_count": 3, 00:21:05.840 "transport_ack_timeout": 0, 00:21:05.840 "ctrlr_loss_timeout_sec": 0, 00:21:05.840 "reconnect_delay_sec": 0, 00:21:05.840 "fast_io_fail_timeout_sec": 0, 00:21:05.840 "disable_auto_failback": false, 00:21:05.840 "generate_uuids": false, 00:21:05.840 "transport_tos": 0, 00:21:05.840 "nvme_error_stat": false, 00:21:05.840 "rdma_srq_size": 0, 00:21:05.840 "io_path_stat": false, 00:21:05.840 "allow_accel_sequence": false, 00:21:05.840 "rdma_max_cq_size": 0, 00:21:05.840 "rdma_cm_event_timeout_ms": 0, 00:21:05.840 "dhchap_digests": [ 00:21:05.840 "sha256", 00:21:05.840 "sha384", 00:21:05.840 "sha512" 00:21:05.840 ], 00:21:05.840 "dhchap_dhgroups": [ 00:21:05.840 "null", 00:21:05.840 "ffdhe2048", 00:21:05.840 "ffdhe3072", 00:21:05.840 "ffdhe4096", 00:21:05.840 "ffdhe6144", 00:21:05.840 "ffdhe8192" 00:21:05.841 ] 00:21:05.841 } 00:21:05.841 }, 00:21:05.841 { 00:21:05.841 "method": "bdev_nvme_attach_controller", 00:21:05.841 "params": { 00:21:05.841 "name": "TLSTEST", 00:21:05.841 "trtype": "TCP", 00:21:05.841 "adrfam": "IPv4", 00:21:05.841 "traddr": "10.0.0.2", 00:21:05.841 "trsvcid": "4420", 00:21:05.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.841 "prchk_reftag": false, 00:21:05.841 "prchk_guard": false, 00:21:05.841 "ctrlr_loss_timeout_sec": 0, 00:21:05.841 "reconnect_delay_sec": 0, 00:21:05.841 "fast_io_fail_timeout_sec": 0, 00:21:05.841 "psk": "/tmp/tmp.IV7W06ltnk", 00:21:05.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:05.841 "hdgst": false, 00:21:05.841 "ddgst": false 00:21:05.841 } 00:21:05.841 }, 00:21:05.841 { 00:21:05.841 "method": "bdev_nvme_set_hotplug", 00:21:05.841 "params": { 00:21:05.841 "period_us": 100000, 00:21:05.841 "enable": false 00:21:05.841 } 00:21:05.841 }, 00:21:05.841 { 00:21:05.841 "method": "bdev_wait_for_examine" 00:21:05.841 } 00:21:05.841 ] 00:21:05.841 }, 00:21:05.841 { 00:21:05.841 "subsystem": "nbd", 00:21:05.841 "config": [] 00:21:05.841 } 00:21:05.841 ] 00:21:05.841 }' 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2005748 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2005748 ']' 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2005748 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:05.841 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005748 00:21:06.100 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:06.100 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:06.100 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005748' 00:21:06.100 killing process with pid 2005748 00:21:06.100 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2005748 00:21:06.100 Received shutdown signal, test time was about 10.000000 seconds 00:21:06.100 00:21:06.100 Latency(us) 00:21:06.100 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:06.100 =================================================================================================================== 00:21:06.100 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:06.100 [2024-07-15 11:47:33.961426] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:06.101 11:47:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2005748 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2005431 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2005431 ']' 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2005431 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2005431 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2005431' 00:21:06.101 killing process with pid 2005431 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2005431 00:21:06.101 [2024-07-15 11:47:34.194343] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:06.101 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2005431 00:21:06.360 11:47:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:06.360 11:47:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:06.360 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:06.360 11:47:34 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:06.360 "subsystems": [ 00:21:06.360 { 00:21:06.360 "subsystem": "keyring", 00:21:06.360 "config": [] 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "subsystem": "iobuf", 00:21:06.360 "config": [ 00:21:06.360 { 00:21:06.360 "method": "iobuf_set_options", 00:21:06.360 "params": { 00:21:06.360 "small_pool_count": 8192, 00:21:06.360 "large_pool_count": 1024, 00:21:06.360 "small_bufsize": 8192, 00:21:06.360 "large_bufsize": 135168 00:21:06.360 } 00:21:06.360 } 00:21:06.360 ] 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "subsystem": "sock", 00:21:06.360 "config": [ 00:21:06.360 { 00:21:06.360 "method": "sock_set_default_impl", 00:21:06.360 "params": { 00:21:06.360 "impl_name": "posix" 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "sock_impl_set_options", 00:21:06.360 "params": { 00:21:06.360 "impl_name": "ssl", 00:21:06.360 "recv_buf_size": 4096, 00:21:06.360 "send_buf_size": 4096, 00:21:06.360 "enable_recv_pipe": true, 00:21:06.360 "enable_quickack": false, 00:21:06.360 "enable_placement_id": 0, 00:21:06.360 "enable_zerocopy_send_server": true, 00:21:06.360 "enable_zerocopy_send_client": false, 00:21:06.360 "zerocopy_threshold": 0, 00:21:06.360 "tls_version": 0, 00:21:06.360 "enable_ktls": false 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "sock_impl_set_options", 00:21:06.360 "params": { 00:21:06.360 "impl_name": "posix", 00:21:06.360 
"recv_buf_size": 2097152, 00:21:06.360 "send_buf_size": 2097152, 00:21:06.360 "enable_recv_pipe": true, 00:21:06.360 "enable_quickack": false, 00:21:06.360 "enable_placement_id": 0, 00:21:06.360 "enable_zerocopy_send_server": true, 00:21:06.360 "enable_zerocopy_send_client": false, 00:21:06.360 "zerocopy_threshold": 0, 00:21:06.360 "tls_version": 0, 00:21:06.360 "enable_ktls": false 00:21:06.360 } 00:21:06.360 } 00:21:06.360 ] 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "subsystem": "vmd", 00:21:06.360 "config": [] 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "subsystem": "accel", 00:21:06.360 "config": [ 00:21:06.360 { 00:21:06.360 "method": "accel_set_options", 00:21:06.360 "params": { 00:21:06.360 "small_cache_size": 128, 00:21:06.360 "large_cache_size": 16, 00:21:06.360 "task_count": 2048, 00:21:06.360 "sequence_count": 2048, 00:21:06.360 "buf_count": 2048 00:21:06.360 } 00:21:06.360 } 00:21:06.360 ] 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "subsystem": "bdev", 00:21:06.360 "config": [ 00:21:06.360 { 00:21:06.360 "method": "bdev_set_options", 00:21:06.360 "params": { 00:21:06.360 "bdev_io_pool_size": 65535, 00:21:06.360 "bdev_io_cache_size": 256, 00:21:06.360 "bdev_auto_examine": true, 00:21:06.360 "iobuf_small_cache_size": 128, 00:21:06.360 "iobuf_large_cache_size": 16 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "bdev_raid_set_options", 00:21:06.360 "params": { 00:21:06.360 "process_window_size_kb": 1024 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "bdev_iscsi_set_options", 00:21:06.360 "params": { 00:21:06.360 "timeout_sec": 30 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "bdev_nvme_set_options", 00:21:06.360 "params": { 00:21:06.360 "action_on_timeout": "none", 00:21:06.360 "timeout_us": 0, 00:21:06.360 "timeout_admin_us": 0, 00:21:06.360 "keep_alive_timeout_ms": 10000, 00:21:06.360 "arbitration_burst": 0, 00:21:06.360 "low_priority_weight": 0, 00:21:06.360 "medium_priority_weight": 0, 00:21:06.360 "high_priority_weight": 0, 00:21:06.360 "nvme_adminq_poll_period_us": 10000, 00:21:06.360 "nvme_ioq_poll_period_us": 0, 00:21:06.360 "io_queue_requests": 0, 00:21:06.360 "delay_cmd_submit": true, 00:21:06.360 "transport_retry_count": 4, 00:21:06.360 "bdev_retry_count": 3, 00:21:06.360 "transport_ack_timeout": 0, 00:21:06.360 "ctrlr_loss_timeout_sec": 0, 00:21:06.360 "reconnect_delay_sec": 0, 00:21:06.360 "fast_io_fail_timeout_sec": 0, 00:21:06.360 "disable_auto_failback": false, 00:21:06.360 "generate_uuids": false, 00:21:06.360 "transport_tos": 0, 00:21:06.360 "nvme_error_stat": false, 00:21:06.360 "rdma_srq_size": 0, 00:21:06.360 "io_path_stat": false, 00:21:06.360 "allow_accel_sequence": false, 00:21:06.360 "rdma_max_cq_size": 0, 00:21:06.360 "rdma_cm_event_timeout_ms": 0, 00:21:06.360 "dhchap_digests": [ 00:21:06.360 "sha256", 00:21:06.360 "sha384", 00:21:06.360 "sha512" 00:21:06.360 ], 00:21:06.360 "dhchap_dhgroups": [ 00:21:06.360 "null", 00:21:06.360 "ffdhe2048", 00:21:06.360 "ffdhe3072", 00:21:06.360 "ffdhe4096", 00:21:06.360 "ffdhe6144", 00:21:06.360 "ffdhe8192" 00:21:06.360 ] 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "bdev_nvme_set_hotplug", 00:21:06.360 "params": { 00:21:06.360 "period_us": 100000, 00:21:06.360 "enable": false 00:21:06.360 } 00:21:06.360 }, 00:21:06.360 { 00:21:06.360 "method": "bdev_malloc_create", 00:21:06.360 "params": { 00:21:06.360 "name": "malloc0", 00:21:06.360 "num_blocks": 8192, 00:21:06.360 "block_size": 4096, 00:21:06.361 "physical_block_size": 4096, 
00:21:06.361 "uuid": "dac0ebb8-a50d-4e38-84aa-e64b3b697b7a", 00:21:06.361 "optimal_io_boundary": 0 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "bdev_wait_for_examine" 00:21:06.361 } 00:21:06.361 ] 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "subsystem": "nbd", 00:21:06.361 "config": [] 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "subsystem": "scheduler", 00:21:06.361 "config": [ 00:21:06.361 { 00:21:06.361 "method": "framework_set_scheduler", 00:21:06.361 "params": { 00:21:06.361 "name": "static" 00:21:06.361 } 00:21:06.361 } 00:21:06.361 ] 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "subsystem": "nvmf", 00:21:06.361 "config": [ 00:21:06.361 { 00:21:06.361 "method": "nvmf_set_config", 00:21:06.361 "params": { 00:21:06.361 "discovery_filter": "match_any", 00:21:06.361 "admin_cmd_passthru": { 00:21:06.361 "identify_ctrlr": false 00:21:06.361 } 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_set_max_subsystems", 00:21:06.361 "params": { 00:21:06.361 "max_subsystems": 1024 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_set_crdt", 00:21:06.361 "params": { 00:21:06.361 "crdt1": 0, 00:21:06.361 "crdt2": 0, 00:21:06.361 "crdt3": 0 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_create_transport", 00:21:06.361 "params": { 00:21:06.361 "trtype": "TCP", 00:21:06.361 "max_queue_depth": 128, 00:21:06.361 "max_io_qpairs_per_ctrlr": 127, 00:21:06.361 "in_capsule_data_size": 4096, 00:21:06.361 "max_io_size": 131072, 00:21:06.361 "io_unit_size": 131072, 00:21:06.361 "max_aq_depth": 128, 00:21:06.361 "num_shared_buffers": 511, 00:21:06.361 "buf_cache_size": 4294967295, 00:21:06.361 "dif_insert_or_strip": false, 00:21:06.361 "zcopy": false, 00:21:06.361 "c2h_success": false, 00:21:06.361 "sock_priority": 0, 00:21:06.361 "abort_timeout_sec": 1, 00:21:06.361 "ack_timeout": 0, 00:21:06.361 "data_wr_pool_size": 0 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_create_subsystem", 00:21:06.361 "params": { 00:21:06.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.361 "allow_any_host": false, 00:21:06.361 "serial_number": "SPDK00000000000001", 00:21:06.361 "model_number": "SPDK bdev Controller", 00:21:06.361 "max_namespaces": 10, 00:21:06.361 "min_cntlid": 1, 00:21:06.361 "max_cntlid": 65519, 00:21:06.361 "ana_reporting": false 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_subsystem_add_host", 00:21:06.361 "params": { 00:21:06.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.361 "host": "nqn.2016-06.io.spdk:host1", 00:21:06.361 "psk": "/tmp/tmp.IV7W06ltnk" 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_subsystem_add_ns", 00:21:06.361 "params": { 00:21:06.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.361 "namespace": { 00:21:06.361 "nsid": 1, 00:21:06.361 "bdev_name": "malloc0", 00:21:06.361 "nguid": "DAC0EBB8A50D4E3884AAE64B3B697B7A", 00:21:06.361 "uuid": "dac0ebb8-a50d-4e38-84aa-e64b3b697b7a", 00:21:06.361 "no_auto_visible": false 00:21:06.361 } 00:21:06.361 } 00:21:06.361 }, 00:21:06.361 { 00:21:06.361 "method": "nvmf_subsystem_add_listener", 00:21:06.361 "params": { 00:21:06.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.361 "listen_address": { 00:21:06.361 "trtype": "TCP", 00:21:06.361 "adrfam": "IPv4", 00:21:06.361 "traddr": "10.0.0.2", 00:21:06.361 "trsvcid": "4420" 00:21:06.361 }, 00:21:06.361 "secure_channel": true 00:21:06.361 } 00:21:06.361 } 00:21:06.361 ] 00:21:06.361 } 00:21:06.361 ] 00:21:06.361 }' 
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2006035
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2006035
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2006035 ']'
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:06.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:06.361 11:47:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:06.361 [2024-07-15 11:47:34.441000] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:21:06.361 [2024-07-15 11:47:34.441051] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:06.619 EAL: No free 2048 kB hugepages reported on node 1
00:21:06.620 [2024-07-15 11:47:34.515712] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:06.620 [2024-07-15 11:47:34.588405] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:06.620 [2024-07-15 11:47:34.588447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:06.620 [2024-07-15 11:47:34.588456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:06.620 [2024-07-15 11:47:34.588465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:06.620 [2024-07-15 11:47:34.588472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
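Editor's note: the target runs with -e 0xFFFF, so every tracepoint group is enabled and the notices above list the two capture options. As shell commands (the build-relative tool path is an assumption here):

    build/bin/spdk_trace -s nvmf -i 0   # snapshot trace events from the running app
    cp /dev/shm/nvmf_trace.0 /tmp/      # or keep the shared-memory ring for offline analysis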
00:21:06.620 [2024-07-15 11:47:34.588527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.878 [2024-07-15 11:47:34.791053] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.878 [2024-07-15 11:47:34.807037] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.878 [2024-07-15 11:47:34.823083] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.878 [2024-07-15 11:47:34.832213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.137 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.137 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:07.137 11:47:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.137 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.137 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2006312 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2006312 /var/tmp/bdevperf.sock 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2006312 ']' 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.396 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.397 11:47:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:07.397 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
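Editor's note: waitforlisten polls the new process's RPC socket until it answers, giving up after max_retries=100 as traced above. A crude stand-in for that loop, using an RPC (rpc_get_methods) that any SPDK app services once its socket is up:

    # Rough equivalent of waitforlisten against the bdevperf RPC socket.
    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.1   # socket not accepting yet; retry
    done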
00:21:07.397 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.397 11:47:35 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:07.397 "subsystems": [ 00:21:07.397 { 00:21:07.397 "subsystem": "keyring", 00:21:07.397 "config": [] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "iobuf", 00:21:07.397 "config": [ 00:21:07.397 { 00:21:07.397 "method": "iobuf_set_options", 00:21:07.397 "params": { 00:21:07.397 "small_pool_count": 8192, 00:21:07.397 "large_pool_count": 1024, 00:21:07.397 "small_bufsize": 8192, 00:21:07.397 "large_bufsize": 135168 00:21:07.397 } 00:21:07.397 } 00:21:07.397 ] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "sock", 00:21:07.397 "config": [ 00:21:07.397 { 00:21:07.397 "method": "sock_set_default_impl", 00:21:07.397 "params": { 00:21:07.397 "impl_name": "posix" 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "sock_impl_set_options", 00:21:07.397 "params": { 00:21:07.397 "impl_name": "ssl", 00:21:07.397 "recv_buf_size": 4096, 00:21:07.397 "send_buf_size": 4096, 00:21:07.397 "enable_recv_pipe": true, 00:21:07.397 "enable_quickack": false, 00:21:07.397 "enable_placement_id": 0, 00:21:07.397 "enable_zerocopy_send_server": true, 00:21:07.397 "enable_zerocopy_send_client": false, 00:21:07.397 "zerocopy_threshold": 0, 00:21:07.397 "tls_version": 0, 00:21:07.397 "enable_ktls": false 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "sock_impl_set_options", 00:21:07.397 "params": { 00:21:07.397 "impl_name": "posix", 00:21:07.397 "recv_buf_size": 2097152, 00:21:07.397 "send_buf_size": 2097152, 00:21:07.397 "enable_recv_pipe": true, 00:21:07.397 "enable_quickack": false, 00:21:07.397 "enable_placement_id": 0, 00:21:07.397 "enable_zerocopy_send_server": true, 00:21:07.397 "enable_zerocopy_send_client": false, 00:21:07.397 "zerocopy_threshold": 0, 00:21:07.397 "tls_version": 0, 00:21:07.397 "enable_ktls": false 00:21:07.397 } 00:21:07.397 } 00:21:07.397 ] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "vmd", 00:21:07.397 "config": [] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "accel", 00:21:07.397 "config": [ 00:21:07.397 { 00:21:07.397 "method": "accel_set_options", 00:21:07.397 "params": { 00:21:07.397 "small_cache_size": 128, 00:21:07.397 "large_cache_size": 16, 00:21:07.397 "task_count": 2048, 00:21:07.397 "sequence_count": 2048, 00:21:07.397 "buf_count": 2048 00:21:07.397 } 00:21:07.397 } 00:21:07.397 ] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "bdev", 00:21:07.397 "config": [ 00:21:07.397 { 00:21:07.397 "method": "bdev_set_options", 00:21:07.397 "params": { 00:21:07.397 "bdev_io_pool_size": 65535, 00:21:07.397 "bdev_io_cache_size": 256, 00:21:07.397 "bdev_auto_examine": true, 00:21:07.397 "iobuf_small_cache_size": 128, 00:21:07.397 "iobuf_large_cache_size": 16 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_raid_set_options", 00:21:07.397 "params": { 00:21:07.397 "process_window_size_kb": 1024 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_iscsi_set_options", 00:21:07.397 "params": { 00:21:07.397 "timeout_sec": 30 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_nvme_set_options", 00:21:07.397 "params": { 00:21:07.397 "action_on_timeout": "none", 00:21:07.397 "timeout_us": 0, 00:21:07.397 "timeout_admin_us": 0, 00:21:07.397 "keep_alive_timeout_ms": 10000, 00:21:07.397 "arbitration_burst": 0, 00:21:07.397 "low_priority_weight": 0, 00:21:07.397 
"medium_priority_weight": 0, 00:21:07.397 "high_priority_weight": 0, 00:21:07.397 "nvme_adminq_poll_period_us": 10000, 00:21:07.397 "nvme_ioq_poll_period_us": 0, 00:21:07.397 "io_queue_requests": 512, 00:21:07.397 "delay_cmd_submit": true, 00:21:07.397 "transport_retry_count": 4, 00:21:07.397 "bdev_retry_count": 3, 00:21:07.397 "transport_ack_timeout": 0, 00:21:07.397 "ctrlr_loss_timeout_sec": 0, 00:21:07.397 "reconnect_delay_sec": 0, 00:21:07.397 "fast_io_fail_timeout_sec": 0, 00:21:07.397 "disable_auto_failback": false, 00:21:07.397 "generate_uuids": false, 00:21:07.397 "transport_tos": 0, 00:21:07.397 "nvme_error_stat": false, 00:21:07.397 "rdma_srq_size": 0, 00:21:07.397 "io_path_stat": false, 00:21:07.397 "allow_accel_sequence": false, 00:21:07.397 "rdma_max_cq_size": 0, 00:21:07.397 "rdma_cm_event_timeout_ms": 0, 00:21:07.397 "dhchap_digests": [ 00:21:07.397 "sha256", 00:21:07.397 "sha384", 00:21:07.397 "sha512" 00:21:07.397 ], 00:21:07.397 "dhchap_dhgroups": [ 00:21:07.397 "null", 00:21:07.397 "ffdhe2048", 00:21:07.397 "ffdhe3072", 00:21:07.397 "ffdhe4096", 00:21:07.397 "ffdhe6144", 00:21:07.397 "ffdhe8192" 00:21:07.397 ] 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_nvme_attach_controller", 00:21:07.397 "params": { 00:21:07.397 "name": "TLSTEST", 00:21:07.397 "trtype": "TCP", 00:21:07.397 "adrfam": "IPv4", 00:21:07.397 "traddr": "10.0.0.2", 00:21:07.397 "trsvcid": "4420", 00:21:07.397 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.397 "prchk_reftag": false, 00:21:07.397 "prchk_guard": false, 00:21:07.397 "ctrlr_loss_timeout_sec": 0, 00:21:07.397 "reconnect_delay_sec": 0, 00:21:07.397 "fast_io_fail_timeout_sec": 0, 00:21:07.397 "psk": "/tmp/tmp.IV7W06ltnk", 00:21:07.397 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.397 "hdgst": false, 00:21:07.397 "ddgst": false 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_nvme_set_hotplug", 00:21:07.397 "params": { 00:21:07.397 "period_us": 100000, 00:21:07.397 "enable": false 00:21:07.397 } 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "method": "bdev_wait_for_examine" 00:21:07.397 } 00:21:07.397 ] 00:21:07.397 }, 00:21:07.397 { 00:21:07.397 "subsystem": "nbd", 00:21:07.397 "config": [] 00:21:07.397 } 00:21:07.397 ] 00:21:07.397 }' 00:21:07.397 11:47:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.397 [2024-07-15 11:47:35.324439] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:21:07.397 [2024-07-15 11:47:35.324489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006312 ] 00:21:07.397 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.397 [2024-07-15 11:47:35.390142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.397 [2024-07-15 11:47:35.458944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.656 [2024-07-15 11:47:35.601706] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.656 [2024-07-15 11:47:35.601794] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:08.224 11:47:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:08.224 11:47:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:08.224 11:47:36 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:08.224 Running I/O for 10 seconds... 00:21:18.199 00:21:18.200 Latency(us) 00:21:18.200 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.200 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:18.200 Verification LBA range: start 0x0 length 0x2000 00:21:18.200 TLSTESTn1 : 10.02 4709.43 18.40 0.00 0.00 27130.76 6920.60 49283.07 00:21:18.200 =================================================================================================================== 00:21:18.200 Total : 4709.43 18.40 0.00 0.00 27130.76 6920.60 49283.07 00:21:18.200 0 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2006312 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2006312 ']' 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2006312 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.200 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006312 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006312' 00:21:18.458 killing process with pid 2006312 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2006312 00:21:18.458 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.458 00:21:18.458 Latency(us) 00:21:18.458 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.458 =================================================================================================================== 00:21:18.458 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.458 [2024-07-15 11:47:46.326577] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2006312 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2006035 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2006035 ']' 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2006035 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.458 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006035 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006035' 00:21:18.459 killing process with pid 2006035 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2006035 00:21:18.459 [2024-07-15 11:47:46.558805] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:18.459 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2006035 00:21:18.717 11:47:46 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:18.717 11:47:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.717 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:18.717 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.717 11:47:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2008162 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2008162 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008162 ']' 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.718 11:47:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.718 [2024-07-15 11:47:46.799616] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:21:18.718 [2024-07-15 11:47:46.799663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.976 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.976 [2024-07-15 11:47:46.871235] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.976 [2024-07-15 11:47:46.941949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.976 [2024-07-15 11:47:46.941988] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.976 [2024-07-15 11:47:46.941997] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.976 [2024-07-15 11:47:46.942005] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.976 [2024-07-15 11:47:46.942012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.976 [2024-07-15 11:47:46.942032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.IV7W06ltnk 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.IV7W06ltnk 00:21:19.544 11:47:47 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:19.802 [2024-07-15 11:47:47.807665] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.802 11:47:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.060 11:47:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:20.060 [2024-07-15 11:47:48.144501] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.061 [2024-07-15 11:47:48.144702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.061 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.320 malloc0 00:21:20.320 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.IV7W06ltnk 00:21:20.578 [2024-07-15 11:47:48.629976] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2008457 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2008457 /var/tmp/bdevperf.sock 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008457 ']' 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.578 11:47:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.836 [2024-07-15 11:47:48.684146] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:20.836 [2024-07-15 11:47:48.684197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008457 ] 00:21:20.836 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.836 [2024-07-15 11:47:48.753209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.836 [2024-07-15 11:47:48.823321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.403 11:47:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.403 11:47:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.403 11:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IV7W06ltnk 00:21:21.661 11:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:21.922 [2024-07-15 11:47:49.809584] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.922 nvme0n1 00:21:21.922 11:47:49 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.922 Running I/O for 1 seconds... 
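Editor's note: this run (target/tls.sh@227-232) is the keyring variant: the PSK file is first registered as a named key and the attach references key0, so the initiator-side psk deprecation warning from the earlier run no longer fires. Condensed from the trace above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IV7W06ltnk
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1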
00:21:23.300 00:21:23.300 Latency(us) 00:21:23.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.300 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:23.300 Verification LBA range: start 0x0 length 0x2000 00:21:23.300 nvme0n1 : 1.03 4293.51 16.77 0.00 0.00 29468.63 4639.95 91016.40 00:21:23.300 =================================================================================================================== 00:21:23.300 Total : 4293.51 16.77 0.00 0.00 29468.63 4639.95 91016.40 00:21:23.300 0 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2008457 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2008457 ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008457 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008457 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008457' 00:21:23.300 killing process with pid 2008457 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008457 00:21:23.300 Received shutdown signal, test time was about 1.000000 seconds 00:21:23.300 00:21:23.300 Latency(us) 00:21:23.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.300 =================================================================================================================== 00:21:23.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008457 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2008162 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2008162 ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008162 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008162 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008162' 00:21:23.300 killing process with pid 2008162 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008162 00:21:23.300 [2024-07-15 11:47:51.304132] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.300 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008162 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.560 
11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2008995 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2008995 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2008995 ']' 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.560 11:47:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.560 [2024-07-15 11:47:51.547499] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:23.560 [2024-07-15 11:47:51.547550] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.560 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.560 [2024-07-15 11:47:51.619299] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.820 [2024-07-15 11:47:51.691959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.820 [2024-07-15 11:47:51.691996] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.820 [2024-07-15 11:47:51.692006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.820 [2024-07-15 11:47:51.692014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.820 [2024-07-15 11:47:51.692020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
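Editor's note: "EAL: No free 2048 kB hugepages reported on node 1" recurs at every app start; the pool is presumably reserved on node 0 only, which is enough for these single-core targets, so the message is informational. If a run did need a larger pool, the usual knob is SPDK's setup script (a sketch; HUGEMEM is the pool size in MB):

    sudo HUGEMEM=4096 scripts/setup.sh   # grow the reserved hugepage pool to 4 GB
    grep HugePages_Free /proc/meminfo    # confirm the allocation took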
00:21:23.820 [2024-07-15 11:47:51.692042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.389 [2024-07-15 11:47:52.398128] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.389 malloc0 00:21:24.389 [2024-07-15 11:47:52.426572] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:24.389 [2024-07-15 11:47:52.426779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2009236 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2009236 /var/tmp/bdevperf.sock 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2009236 ']' 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.389 11:47:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.648 [2024-07-15 11:47:52.504188] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
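Editor's note: unlike the earlier setup_nvmf_tgt path, target/tls.sh@239 drives this target through rpc_cmd, the autotest helper that keeps a persistent rpc.py session open rather than paying Python startup cost per call; the transport, malloc0 and TLS-listener notices above are that batch landing. A rough standalone reconstruction, assuming rpc.py's read-commands-from-stdin mode (the exact batch is not echoed in the log, so the commands below are inferred from the tgtcfg dump that follows):

    scripts/rpc.py <<'EOF'
    nvmf_create_transport -t tcp -o
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -m 32
    bdev_malloc_create 32 4096 -b malloc0
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    keyring_file_add_key key0 /tmp/tmp.IV7W06ltnk
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    EOF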
00:21:24.648 [2024-07-15 11:47:52.504235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009236 ] 00:21:24.648 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.648 [2024-07-15 11:47:52.575350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.648 [2024-07-15 11:47:52.649866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.217 11:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.217 11:47:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:25.217 11:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.IV7W06ltnk 00:21:25.476 11:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:25.734 [2024-07-15 11:47:53.621136] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.734 nvme0n1 00:21:25.734 11:47:53 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:25.734 Running I/O for 1 seconds... 00:21:27.112 00:21:27.112 Latency(us) 00:21:27.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.112 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:27.112 Verification LBA range: start 0x0 length 0x2000 00:21:27.112 nvme0n1 : 1.03 4233.16 16.54 0.00 0.00 29879.00 6343.88 87660.95 00:21:27.112 =================================================================================================================== 00:21:27.112 Total : 4233.16 16.54 0.00 0.00 29879.00 6343.88 87660.95 00:21:27.112 0 00:21:27.112 11:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:21:27.112 11:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.112 11:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.112 11:47:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.112 11:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:21:27.112 "subsystems": [ 00:21:27.112 { 00:21:27.112 "subsystem": "keyring", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "keyring_file_add_key", 00:21:27.112 "params": { 00:21:27.112 "name": "key0", 00:21:27.112 "path": "/tmp/tmp.IV7W06ltnk" 00:21:27.112 } 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "iobuf", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "iobuf_set_options", 00:21:27.112 "params": { 00:21:27.112 "small_pool_count": 8192, 00:21:27.112 "large_pool_count": 1024, 00:21:27.112 "small_bufsize": 8192, 00:21:27.112 "large_bufsize": 135168 00:21:27.112 } 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "sock", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "sock_set_default_impl", 00:21:27.112 "params": { 00:21:27.112 "impl_name": "posix" 00:21:27.112 } 
00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "sock_impl_set_options", 00:21:27.112 "params": { 00:21:27.112 "impl_name": "ssl", 00:21:27.112 "recv_buf_size": 4096, 00:21:27.112 "send_buf_size": 4096, 00:21:27.112 "enable_recv_pipe": true, 00:21:27.112 "enable_quickack": false, 00:21:27.112 "enable_placement_id": 0, 00:21:27.112 "enable_zerocopy_send_server": true, 00:21:27.112 "enable_zerocopy_send_client": false, 00:21:27.112 "zerocopy_threshold": 0, 00:21:27.112 "tls_version": 0, 00:21:27.112 "enable_ktls": false 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "sock_impl_set_options", 00:21:27.112 "params": { 00:21:27.112 "impl_name": "posix", 00:21:27.112 "recv_buf_size": 2097152, 00:21:27.112 "send_buf_size": 2097152, 00:21:27.112 "enable_recv_pipe": true, 00:21:27.112 "enable_quickack": false, 00:21:27.112 "enable_placement_id": 0, 00:21:27.112 "enable_zerocopy_send_server": true, 00:21:27.112 "enable_zerocopy_send_client": false, 00:21:27.112 "zerocopy_threshold": 0, 00:21:27.112 "tls_version": 0, 00:21:27.112 "enable_ktls": false 00:21:27.112 } 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "vmd", 00:21:27.112 "config": [] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "accel", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "accel_set_options", 00:21:27.112 "params": { 00:21:27.112 "small_cache_size": 128, 00:21:27.112 "large_cache_size": 16, 00:21:27.112 "task_count": 2048, 00:21:27.112 "sequence_count": 2048, 00:21:27.112 "buf_count": 2048 00:21:27.112 } 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "bdev", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "bdev_set_options", 00:21:27.112 "params": { 00:21:27.112 "bdev_io_pool_size": 65535, 00:21:27.112 "bdev_io_cache_size": 256, 00:21:27.112 "bdev_auto_examine": true, 00:21:27.112 "iobuf_small_cache_size": 128, 00:21:27.112 "iobuf_large_cache_size": 16 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_raid_set_options", 00:21:27.112 "params": { 00:21:27.112 "process_window_size_kb": 1024 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_iscsi_set_options", 00:21:27.112 "params": { 00:21:27.112 "timeout_sec": 30 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_nvme_set_options", 00:21:27.112 "params": { 00:21:27.112 "action_on_timeout": "none", 00:21:27.112 "timeout_us": 0, 00:21:27.112 "timeout_admin_us": 0, 00:21:27.112 "keep_alive_timeout_ms": 10000, 00:21:27.112 "arbitration_burst": 0, 00:21:27.112 "low_priority_weight": 0, 00:21:27.112 "medium_priority_weight": 0, 00:21:27.112 "high_priority_weight": 0, 00:21:27.112 "nvme_adminq_poll_period_us": 10000, 00:21:27.112 "nvme_ioq_poll_period_us": 0, 00:21:27.112 "io_queue_requests": 0, 00:21:27.112 "delay_cmd_submit": true, 00:21:27.112 "transport_retry_count": 4, 00:21:27.112 "bdev_retry_count": 3, 00:21:27.112 "transport_ack_timeout": 0, 00:21:27.112 "ctrlr_loss_timeout_sec": 0, 00:21:27.112 "reconnect_delay_sec": 0, 00:21:27.112 "fast_io_fail_timeout_sec": 0, 00:21:27.112 "disable_auto_failback": false, 00:21:27.112 "generate_uuids": false, 00:21:27.112 "transport_tos": 0, 00:21:27.112 "nvme_error_stat": false, 00:21:27.112 "rdma_srq_size": 0, 00:21:27.112 "io_path_stat": false, 00:21:27.112 "allow_accel_sequence": false, 00:21:27.112 "rdma_max_cq_size": 0, 00:21:27.112 "rdma_cm_event_timeout_ms": 0, 00:21:27.112 "dhchap_digests": [ 00:21:27.112 "sha256", 
00:21:27.112 "sha384", 00:21:27.112 "sha512" 00:21:27.112 ], 00:21:27.112 "dhchap_dhgroups": [ 00:21:27.112 "null", 00:21:27.112 "ffdhe2048", 00:21:27.112 "ffdhe3072", 00:21:27.112 "ffdhe4096", 00:21:27.112 "ffdhe6144", 00:21:27.112 "ffdhe8192" 00:21:27.112 ] 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_nvme_set_hotplug", 00:21:27.112 "params": { 00:21:27.112 "period_us": 100000, 00:21:27.112 "enable": false 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_malloc_create", 00:21:27.112 "params": { 00:21:27.112 "name": "malloc0", 00:21:27.112 "num_blocks": 8192, 00:21:27.112 "block_size": 4096, 00:21:27.112 "physical_block_size": 4096, 00:21:27.112 "uuid": "1f2cac2a-ade8-4b51-b5e3-d4542f2ec9b5", 00:21:27.112 "optimal_io_boundary": 0 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "bdev_wait_for_examine" 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "nbd", 00:21:27.112 "config": [] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "scheduler", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "framework_set_scheduler", 00:21:27.112 "params": { 00:21:27.112 "name": "static" 00:21:27.112 } 00:21:27.112 } 00:21:27.112 ] 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "subsystem": "nvmf", 00:21:27.112 "config": [ 00:21:27.112 { 00:21:27.112 "method": "nvmf_set_config", 00:21:27.112 "params": { 00:21:27.112 "discovery_filter": "match_any", 00:21:27.112 "admin_cmd_passthru": { 00:21:27.112 "identify_ctrlr": false 00:21:27.112 } 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "nvmf_set_max_subsystems", 00:21:27.112 "params": { 00:21:27.112 "max_subsystems": 1024 00:21:27.112 } 00:21:27.112 }, 00:21:27.112 { 00:21:27.112 "method": "nvmf_set_crdt", 00:21:27.112 "params": { 00:21:27.113 "crdt1": 0, 00:21:27.113 "crdt2": 0, 00:21:27.113 "crdt3": 0 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "nvmf_create_transport", 00:21:27.113 "params": { 00:21:27.113 "trtype": "TCP", 00:21:27.113 "max_queue_depth": 128, 00:21:27.113 "max_io_qpairs_per_ctrlr": 127, 00:21:27.113 "in_capsule_data_size": 4096, 00:21:27.113 "max_io_size": 131072, 00:21:27.113 "io_unit_size": 131072, 00:21:27.113 "max_aq_depth": 128, 00:21:27.113 "num_shared_buffers": 511, 00:21:27.113 "buf_cache_size": 4294967295, 00:21:27.113 "dif_insert_or_strip": false, 00:21:27.113 "zcopy": false, 00:21:27.113 "c2h_success": false, 00:21:27.113 "sock_priority": 0, 00:21:27.113 "abort_timeout_sec": 1, 00:21:27.113 "ack_timeout": 0, 00:21:27.113 "data_wr_pool_size": 0 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "nvmf_create_subsystem", 00:21:27.113 "params": { 00:21:27.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.113 "allow_any_host": false, 00:21:27.113 "serial_number": "00000000000000000000", 00:21:27.113 "model_number": "SPDK bdev Controller", 00:21:27.113 "max_namespaces": 32, 00:21:27.113 "min_cntlid": 1, 00:21:27.113 "max_cntlid": 65519, 00:21:27.113 "ana_reporting": false 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "nvmf_subsystem_add_host", 00:21:27.113 "params": { 00:21:27.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.113 "host": "nqn.2016-06.io.spdk:host1", 00:21:27.113 "psk": "key0" 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "nvmf_subsystem_add_ns", 00:21:27.113 "params": { 00:21:27.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.113 "namespace": { 00:21:27.113 "nsid": 1, 
00:21:27.113 "bdev_name": "malloc0", 00:21:27.113 "nguid": "1F2CAC2AADE84B51B5E3D4542F2EC9B5", 00:21:27.113 "uuid": "1f2cac2a-ade8-4b51-b5e3-d4542f2ec9b5", 00:21:27.113 "no_auto_visible": false 00:21:27.113 } 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "nvmf_subsystem_add_listener", 00:21:27.113 "params": { 00:21:27.113 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.113 "listen_address": { 00:21:27.113 "trtype": "TCP", 00:21:27.113 "adrfam": "IPv4", 00:21:27.113 "traddr": "10.0.0.2", 00:21:27.113 "trsvcid": "4420" 00:21:27.113 }, 00:21:27.113 "secure_channel": true 00:21:27.113 } 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 }' 00:21:27.113 11:47:54 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:27.113 11:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:21:27.113 "subsystems": [ 00:21:27.113 { 00:21:27.113 "subsystem": "keyring", 00:21:27.113 "config": [ 00:21:27.113 { 00:21:27.113 "method": "keyring_file_add_key", 00:21:27.113 "params": { 00:21:27.113 "name": "key0", 00:21:27.113 "path": "/tmp/tmp.IV7W06ltnk" 00:21:27.113 } 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "subsystem": "iobuf", 00:21:27.113 "config": [ 00:21:27.113 { 00:21:27.113 "method": "iobuf_set_options", 00:21:27.113 "params": { 00:21:27.113 "small_pool_count": 8192, 00:21:27.113 "large_pool_count": 1024, 00:21:27.113 "small_bufsize": 8192, 00:21:27.113 "large_bufsize": 135168 00:21:27.113 } 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "subsystem": "sock", 00:21:27.113 "config": [ 00:21:27.113 { 00:21:27.113 "method": "sock_set_default_impl", 00:21:27.113 "params": { 00:21:27.113 "impl_name": "posix" 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "sock_impl_set_options", 00:21:27.113 "params": { 00:21:27.113 "impl_name": "ssl", 00:21:27.113 "recv_buf_size": 4096, 00:21:27.113 "send_buf_size": 4096, 00:21:27.113 "enable_recv_pipe": true, 00:21:27.113 "enable_quickack": false, 00:21:27.113 "enable_placement_id": 0, 00:21:27.113 "enable_zerocopy_send_server": true, 00:21:27.113 "enable_zerocopy_send_client": false, 00:21:27.113 "zerocopy_threshold": 0, 00:21:27.113 "tls_version": 0, 00:21:27.113 "enable_ktls": false 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "sock_impl_set_options", 00:21:27.113 "params": { 00:21:27.113 "impl_name": "posix", 00:21:27.113 "recv_buf_size": 2097152, 00:21:27.113 "send_buf_size": 2097152, 00:21:27.113 "enable_recv_pipe": true, 00:21:27.113 "enable_quickack": false, 00:21:27.113 "enable_placement_id": 0, 00:21:27.113 "enable_zerocopy_send_server": true, 00:21:27.113 "enable_zerocopy_send_client": false, 00:21:27.113 "zerocopy_threshold": 0, 00:21:27.113 "tls_version": 0, 00:21:27.113 "enable_ktls": false 00:21:27.113 } 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "subsystem": "vmd", 00:21:27.113 "config": [] 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "subsystem": "accel", 00:21:27.113 "config": [ 00:21:27.113 { 00:21:27.113 "method": "accel_set_options", 00:21:27.113 "params": { 00:21:27.113 "small_cache_size": 128, 00:21:27.113 "large_cache_size": 16, 00:21:27.113 "task_count": 2048, 00:21:27.113 "sequence_count": 2048, 00:21:27.113 "buf_count": 2048 00:21:27.113 } 00:21:27.113 } 00:21:27.113 ] 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "subsystem": "bdev", 00:21:27.113 "config": [ 
00:21:27.113 { 00:21:27.113 "method": "bdev_set_options", 00:21:27.113 "params": { 00:21:27.113 "bdev_io_pool_size": 65535, 00:21:27.113 "bdev_io_cache_size": 256, 00:21:27.113 "bdev_auto_examine": true, 00:21:27.113 "iobuf_small_cache_size": 128, 00:21:27.113 "iobuf_large_cache_size": 16 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "bdev_raid_set_options", 00:21:27.113 "params": { 00:21:27.113 "process_window_size_kb": 1024 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "bdev_iscsi_set_options", 00:21:27.113 "params": { 00:21:27.113 "timeout_sec": 30 00:21:27.113 } 00:21:27.113 }, 00:21:27.113 { 00:21:27.113 "method": "bdev_nvme_set_options", 00:21:27.113 "params": { 00:21:27.114 "action_on_timeout": "none", 00:21:27.114 "timeout_us": 0, 00:21:27.114 "timeout_admin_us": 0, 00:21:27.114 "keep_alive_timeout_ms": 10000, 00:21:27.114 "arbitration_burst": 0, 00:21:27.114 "low_priority_weight": 0, 00:21:27.114 "medium_priority_weight": 0, 00:21:27.114 "high_priority_weight": 0, 00:21:27.114 "nvme_adminq_poll_period_us": 10000, 00:21:27.114 "nvme_ioq_poll_period_us": 0, 00:21:27.114 "io_queue_requests": 512, 00:21:27.114 "delay_cmd_submit": true, 00:21:27.114 "transport_retry_count": 4, 00:21:27.114 "bdev_retry_count": 3, 00:21:27.114 "transport_ack_timeout": 0, 00:21:27.114 "ctrlr_loss_timeout_sec": 0, 00:21:27.114 "reconnect_delay_sec": 0, 00:21:27.114 "fast_io_fail_timeout_sec": 0, 00:21:27.114 "disable_auto_failback": false, 00:21:27.114 "generate_uuids": false, 00:21:27.114 "transport_tos": 0, 00:21:27.114 "nvme_error_stat": false, 00:21:27.114 "rdma_srq_size": 0, 00:21:27.114 "io_path_stat": false, 00:21:27.114 "allow_accel_sequence": false, 00:21:27.114 "rdma_max_cq_size": 0, 00:21:27.114 "rdma_cm_event_timeout_ms": 0, 00:21:27.114 "dhchap_digests": [ 00:21:27.114 "sha256", 00:21:27.114 "sha384", 00:21:27.114 "sha512" 00:21:27.114 ], 00:21:27.114 "dhchap_dhgroups": [ 00:21:27.114 "null", 00:21:27.114 "ffdhe2048", 00:21:27.114 "ffdhe3072", 00:21:27.114 "ffdhe4096", 00:21:27.114 "ffdhe6144", 00:21:27.114 "ffdhe8192" 00:21:27.114 ] 00:21:27.114 } 00:21:27.114 }, 00:21:27.114 { 00:21:27.114 "method": "bdev_nvme_attach_controller", 00:21:27.114 "params": { 00:21:27.114 "name": "nvme0", 00:21:27.114 "trtype": "TCP", 00:21:27.114 "adrfam": "IPv4", 00:21:27.114 "traddr": "10.0.0.2", 00:21:27.114 "trsvcid": "4420", 00:21:27.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.114 "prchk_reftag": false, 00:21:27.114 "prchk_guard": false, 00:21:27.114 "ctrlr_loss_timeout_sec": 0, 00:21:27.114 "reconnect_delay_sec": 0, 00:21:27.114 "fast_io_fail_timeout_sec": 0, 00:21:27.114 "psk": "key0", 00:21:27.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.114 "hdgst": false, 00:21:27.114 "ddgst": false 00:21:27.114 } 00:21:27.114 }, 00:21:27.114 { 00:21:27.114 "method": "bdev_nvme_set_hotplug", 00:21:27.114 "params": { 00:21:27.114 "period_us": 100000, 00:21:27.114 "enable": false 00:21:27.114 } 00:21:27.114 }, 00:21:27.114 { 00:21:27.114 "method": "bdev_enable_histogram", 00:21:27.114 "params": { 00:21:27.114 "name": "nvme0n1", 00:21:27.114 "enable": true 00:21:27.114 } 00:21:27.114 }, 00:21:27.114 { 00:21:27.114 "method": "bdev_wait_for_examine" 00:21:27.114 } 00:21:27.114 ] 00:21:27.114 }, 00:21:27.114 { 00:21:27.114 "subsystem": "nbd", 00:21:27.114 "config": [] 00:21:27.114 } 00:21:27.114 ] 00:21:27.114 }' 00:21:27.114 11:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2009236 00:21:27.114 11:47:55 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2009236 ']' 00:21:27.114 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2009236 00:21:27.114 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2009236 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2009236' 00:21:27.374 killing process with pid 2009236 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2009236 00:21:27.374 Received shutdown signal, test time was about 1.000000 seconds 00:21:27.374 00:21:27.374 Latency(us) 00:21:27.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.374 =================================================================================================================== 00:21:27.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2009236 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2008995 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2008995 ']' 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2008995 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:27.374 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2008995 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2008995' 00:21:27.634 killing process with pid 2008995 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2008995 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2008995 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:27.634 11:47:55 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:21:27.634 "subsystems": [ 00:21:27.634 { 00:21:27.634 "subsystem": "keyring", 00:21:27.634 "config": [ 00:21:27.634 { 00:21:27.634 "method": "keyring_file_add_key", 00:21:27.634 "params": { 00:21:27.634 "name": "key0", 00:21:27.634 "path": "/tmp/tmp.IV7W06ltnk" 00:21:27.634 } 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "iobuf", 00:21:27.634 "config": [ 00:21:27.634 { 00:21:27.634 "method": "iobuf_set_options", 00:21:27.634 "params": { 00:21:27.634 "small_pool_count": 8192, 00:21:27.634 "large_pool_count": 1024, 00:21:27.634 "small_bufsize": 8192, 00:21:27.634 "large_bufsize": 135168 00:21:27.634 } 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "sock", 00:21:27.634 "config": [ 00:21:27.634 { 
00:21:27.634 "method": "sock_set_default_impl", 00:21:27.634 "params": { 00:21:27.634 "impl_name": "posix" 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "sock_impl_set_options", 00:21:27.634 "params": { 00:21:27.634 "impl_name": "ssl", 00:21:27.634 "recv_buf_size": 4096, 00:21:27.634 "send_buf_size": 4096, 00:21:27.634 "enable_recv_pipe": true, 00:21:27.634 "enable_quickack": false, 00:21:27.634 "enable_placement_id": 0, 00:21:27.634 "enable_zerocopy_send_server": true, 00:21:27.634 "enable_zerocopy_send_client": false, 00:21:27.634 "zerocopy_threshold": 0, 00:21:27.634 "tls_version": 0, 00:21:27.634 "enable_ktls": false 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "sock_impl_set_options", 00:21:27.634 "params": { 00:21:27.634 "impl_name": "posix", 00:21:27.634 "recv_buf_size": 2097152, 00:21:27.634 "send_buf_size": 2097152, 00:21:27.634 "enable_recv_pipe": true, 00:21:27.634 "enable_quickack": false, 00:21:27.634 "enable_placement_id": 0, 00:21:27.634 "enable_zerocopy_send_server": true, 00:21:27.634 "enable_zerocopy_send_client": false, 00:21:27.634 "zerocopy_threshold": 0, 00:21:27.634 "tls_version": 0, 00:21:27.634 "enable_ktls": false 00:21:27.634 } 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "vmd", 00:21:27.634 "config": [] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "accel", 00:21:27.634 "config": [ 00:21:27.634 { 00:21:27.634 "method": "accel_set_options", 00:21:27.634 "params": { 00:21:27.634 "small_cache_size": 128, 00:21:27.634 "large_cache_size": 16, 00:21:27.634 "task_count": 2048, 00:21:27.634 "sequence_count": 2048, 00:21:27.634 "buf_count": 2048 00:21:27.634 } 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "bdev", 00:21:27.634 "config": [ 00:21:27.634 { 00:21:27.634 "method": "bdev_set_options", 00:21:27.634 "params": { 00:21:27.634 "bdev_io_pool_size": 65535, 00:21:27.634 "bdev_io_cache_size": 256, 00:21:27.634 "bdev_auto_examine": true, 00:21:27.634 "iobuf_small_cache_size": 128, 00:21:27.634 "iobuf_large_cache_size": 16 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_raid_set_options", 00:21:27.634 "params": { 00:21:27.634 "process_window_size_kb": 1024 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_iscsi_set_options", 00:21:27.634 "params": { 00:21:27.634 "timeout_sec": 30 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_nvme_set_options", 00:21:27.634 "params": { 00:21:27.634 "action_on_timeout": "none", 00:21:27.634 "timeout_us": 0, 00:21:27.634 "timeout_admin_us": 0, 00:21:27.634 "keep_alive_timeout_ms": 10000, 00:21:27.634 "arbitration_burst": 0, 00:21:27.634 "low_priority_weight": 0, 00:21:27.634 "medium_priority_weight": 0, 00:21:27.634 "high_priority_weight": 0, 00:21:27.634 "nvme_adminq_poll_period_us": 10000, 00:21:27.634 "nvme_ioq_poll_period_us": 0, 00:21:27.634 "io_queue_requests": 0, 00:21:27.634 "delay_cmd_submit": true, 00:21:27.634 "transport_retry_count": 4, 00:21:27.634 "bdev_retry_count": 3, 00:21:27.634 "transport_ack_timeout": 0, 00:21:27.634 "ctrlr_loss_timeout_sec": 0, 00:21:27.634 "reconnect_delay_sec": 0, 00:21:27.634 "fast_io_fail_timeout_sec": 0, 00:21:27.634 "disable_auto_failback": false, 00:21:27.634 "generate_uuids": false, 00:21:27.634 "transport_tos": 0, 00:21:27.634 "nvme_error_stat": false, 00:21:27.634 "rdma_srq_size": 0, 00:21:27.634 "io_path_stat": false, 00:21:27.634 "allow_accel_sequence": false, 00:21:27.634 
"rdma_max_cq_size": 0, 00:21:27.634 "rdma_cm_event_timeout_ms": 0, 00:21:27.634 "dhchap_digests": [ 00:21:27.634 "sha256", 00:21:27.634 "sha384", 00:21:27.634 "sha512" 00:21:27.634 ], 00:21:27.634 "dhchap_dhgroups": [ 00:21:27.634 "null", 00:21:27.634 "ffdhe2048", 00:21:27.634 "ffdhe3072", 00:21:27.634 "ffdhe4096", 00:21:27.634 "ffdhe6144", 00:21:27.634 "ffdhe8192" 00:21:27.634 ] 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_nvme_set_hotplug", 00:21:27.634 "params": { 00:21:27.634 "period_us": 100000, 00:21:27.634 "enable": false 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_malloc_create", 00:21:27.634 "params": { 00:21:27.634 "name": "malloc0", 00:21:27.634 "num_blocks": 8192, 00:21:27.634 "block_size": 4096, 00:21:27.634 "physical_block_size": 4096, 00:21:27.634 "uuid": "1f2cac2a-ade8-4b51-b5e3-d4542f2ec9b5", 00:21:27.634 "optimal_io_boundary": 0 00:21:27.634 } 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "method": "bdev_wait_for_examine" 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "nbd", 00:21:27.634 "config": [] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "scheduler", 00:21:27.634 "config": [ 00:21:27.634 { 00:21:27.634 "method": "framework_set_scheduler", 00:21:27.634 "params": { 00:21:27.634 "name": "static" 00:21:27.634 } 00:21:27.634 } 00:21:27.634 ] 00:21:27.634 }, 00:21:27.634 { 00:21:27.634 "subsystem": "nvmf", 00:21:27.634 "config": [ 00:21:27.635 { 00:21:27.635 "method": "nvmf_set_config", 00:21:27.635 "params": { 00:21:27.635 "discovery_filter": "match_any", 00:21:27.635 "admin_cmd_passthru": { 00:21:27.635 "identify_ctrlr": false 00:21:27.635 } 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_set_max_subsystems", 00:21:27.635 "params": { 00:21:27.635 "max_subsystems": 1024 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_set_crdt", 00:21:27.635 "params": { 00:21:27.635 "crdt1": 0, 00:21:27.635 "crdt2": 0, 00:21:27.635 "crdt3": 0 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_create_transport", 00:21:27.635 "params": { 00:21:27.635 "trtype": "TCP", 00:21:27.635 "max_queue_depth": 128, 00:21:27.635 "max_io_qpairs_per_ctrlr": 127, 00:21:27.635 "in_capsule_data_size": 4096, 00:21:27.635 "max_io_size": 131072, 00:21:27.635 "io_unit_size": 131072, 00:21:27.635 "max_aq_depth": 128, 00:21:27.635 "num_shared_buffers": 511, 00:21:27.635 "buf_cache_size": 4294967295, 00:21:27.635 "dif_insert_or_strip": false, 00:21:27.635 "zcopy": false, 00:21:27.635 "c2h_success": false, 00:21:27.635 "sock_priority": 0, 00:21:27.635 "abort_timeout_sec": 1, 00:21:27.635 "ack_timeout": 0, 00:21:27.635 "data_wr_pool_size": 0 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_create_subsystem", 00:21:27.635 "params": { 00:21:27.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.635 "allow_any_host": false, 00:21:27.635 "serial_number": "00000000000000000000", 00:21:27.635 "model_number": "SPDK bdev Controller", 00:21:27.635 "max_namespaces": 32, 00:21:27.635 "min_cntlid": 1, 00:21:27.635 "max_cntlid": 65519, 00:21:27.635 "ana_reporting": false 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_subsystem_add_host", 00:21:27.635 "params": { 00:21:27.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.635 "host": "nqn.2016-06.io.spdk:host1", 00:21:27.635 "psk": "key0" 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_subsystem_add_ns", 00:21:27.635 
"params": { 00:21:27.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.635 "namespace": { 00:21:27.635 "nsid": 1, 00:21:27.635 "bdev_name": "malloc0", 00:21:27.635 "nguid": "1F2CAC2AADE84B51B5E3D4542F2EC9B5", 00:21:27.635 "uuid": "1f2cac2a-ade8-4b51-b5e3-d4542f2ec9b5", 00:21:27.635 "no_auto_visible": false 00:21:27.635 } 00:21:27.635 } 00:21:27.635 }, 00:21:27.635 { 00:21:27.635 "method": "nvmf_subsystem_add_listener", 00:21:27.635 "params": { 00:21:27.635 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.635 "listen_address": { 00:21:27.635 "trtype": "TCP", 00:21:27.635 "adrfam": "IPv4", 00:21:27.635 "traddr": "10.0.0.2", 00:21:27.635 "trsvcid": "4420" 00:21:27.635 }, 00:21:27.635 "secure_channel": true 00:21:27.635 } 00:21:27.635 } 00:21:27.635 ] 00:21:27.635 } 00:21:27.635 ] 00:21:27.635 }' 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2009815 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2009815 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2009815 ']' 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:27.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.635 11:47:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.894 [2024-07-15 11:47:55.755032] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:27.894 [2024-07-15 11:47:55.755084] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:27.894 EAL: No free 2048 kB hugepages reported on node 1 00:21:27.894 [2024-07-15 11:47:55.828644] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.894 [2024-07-15 11:47:55.901639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:27.895 [2024-07-15 11:47:55.901676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:27.895 [2024-07-15 11:47:55.901685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:27.895 [2024-07-15 11:47:55.901693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:27.895 [2024-07-15 11:47:55.901700] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:27.895 [2024-07-15 11:47:55.901758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.154 [2024-07-15 11:47:56.111912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:28.154 [2024-07-15 11:47:56.143946] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:28.154 [2024-07-15 11:47:56.152105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2009847 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2009847 /var/tmp/bdevperf.sock 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2009847 ']' 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:28.767 11:47:56 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:21:28.767 "subsystems": [ 00:21:28.767 { 00:21:28.767 "subsystem": "keyring", 00:21:28.767 "config": [ 00:21:28.767 { 00:21:28.767 "method": "keyring_file_add_key", 00:21:28.767 "params": { 00:21:28.767 "name": "key0", 00:21:28.767 "path": "/tmp/tmp.IV7W06ltnk" 00:21:28.767 } 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "iobuf", 00:21:28.768 "config": [ 00:21:28.768 { 00:21:28.768 "method": "iobuf_set_options", 00:21:28.768 "params": { 00:21:28.768 "small_pool_count": 8192, 00:21:28.768 "large_pool_count": 1024, 00:21:28.768 "small_bufsize": 8192, 00:21:28.768 "large_bufsize": 135168 00:21:28.768 } 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "sock", 00:21:28.768 "config": [ 00:21:28.768 { 00:21:28.768 "method": "sock_set_default_impl", 00:21:28.768 "params": { 00:21:28.768 "impl_name": "posix" 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "sock_impl_set_options", 00:21:28.768 "params": { 00:21:28.768 "impl_name": "ssl", 00:21:28.768 "recv_buf_size": 4096, 00:21:28.768 "send_buf_size": 4096, 00:21:28.768 "enable_recv_pipe": true, 00:21:28.768 "enable_quickack": false, 00:21:28.768 "enable_placement_id": 0, 00:21:28.768 "enable_zerocopy_send_server": true, 00:21:28.768 "enable_zerocopy_send_client": false, 00:21:28.768 "zerocopy_threshold": 0, 00:21:28.768 "tls_version": 0, 00:21:28.768 "enable_ktls": false 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "sock_impl_set_options", 00:21:28.768 "params": { 00:21:28.768 "impl_name": "posix", 00:21:28.768 "recv_buf_size": 2097152, 00:21:28.768 "send_buf_size": 2097152, 00:21:28.768 
"enable_recv_pipe": true, 00:21:28.768 "enable_quickack": false, 00:21:28.768 "enable_placement_id": 0, 00:21:28.768 "enable_zerocopy_send_server": true, 00:21:28.768 "enable_zerocopy_send_client": false, 00:21:28.768 "zerocopy_threshold": 0, 00:21:28.768 "tls_version": 0, 00:21:28.768 "enable_ktls": false 00:21:28.768 } 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "vmd", 00:21:28.768 "config": [] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "accel", 00:21:28.768 "config": [ 00:21:28.768 { 00:21:28.768 "method": "accel_set_options", 00:21:28.768 "params": { 00:21:28.768 "small_cache_size": 128, 00:21:28.768 "large_cache_size": 16, 00:21:28.768 "task_count": 2048, 00:21:28.768 "sequence_count": 2048, 00:21:28.768 "buf_count": 2048 00:21:28.768 } 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "bdev", 00:21:28.768 "config": [ 00:21:28.768 { 00:21:28.768 "method": "bdev_set_options", 00:21:28.768 "params": { 00:21:28.768 "bdev_io_pool_size": 65535, 00:21:28.768 "bdev_io_cache_size": 256, 00:21:28.768 "bdev_auto_examine": true, 00:21:28.768 "iobuf_small_cache_size": 128, 00:21:28.768 "iobuf_large_cache_size": 16 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_raid_set_options", 00:21:28.768 "params": { 00:21:28.768 "process_window_size_kb": 1024 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_iscsi_set_options", 00:21:28.768 "params": { 00:21:28.768 "timeout_sec": 30 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_nvme_set_options", 00:21:28.768 "params": { 00:21:28.768 "action_on_timeout": "none", 00:21:28.768 "timeout_us": 0, 00:21:28.768 "timeout_admin_us": 0, 00:21:28.768 "keep_alive_timeout_ms": 10000, 00:21:28.768 "arbitration_burst": 0, 00:21:28.768 "low_priority_weight": 0, 00:21:28.768 "medium_priority_weight": 0, 00:21:28.768 "high_priority_weight": 0, 00:21:28.768 "nvme_adminq_poll_period_us": 10000, 00:21:28.768 "nvme_ioq_poll_period_us": 0, 00:21:28.768 "io_queue_requests": 512, 00:21:28.768 "delay_cmd_submit": true, 00:21:28.768 "transport_retry_count": 4, 00:21:28.768 "bdev_retry_count": 3, 00:21:28.768 "transport_ack_timeout": 0, 00:21:28.768 "ctrlr_loss_timeout_sec": 0, 00:21:28.768 "reconnect_delay_sec": 0, 00:21:28.768 "fast_io_fail_timeout_sec": 0, 00:21:28.768 "disable_auto_failback": false, 00:21:28.768 "generate_uuids": false, 00:21:28.768 "transport_tos": 0, 00:21:28.768 "nvme_error_stat": false, 00:21:28.768 "rdma_srq_size": 0, 00:21:28.768 "io_path_stat": false, 00:21:28.768 "allow_accel_sequence": false, 00:21:28.768 "rdma_max_cq_size": 0, 00:21:28.768 "rdma_cm_event_timeout_ms": 0, 00:21:28.768 "dhchap_digests": [ 00:21:28.768 "sha256", 00:21:28.768 "sha384", 00:21:28.768 "sha512" 00:21:28.768 ], 00:21:28.768 "dhchap_dhgroups": [ 00:21:28.768 "null", 00:21:28.768 "ffdhe2048", 00:21:28.768 "ffdhe3072", 00:21:28.768 "ffdhe4096", 00:21:28.768 "ffdhe6144", 00:21:28.768 "ffdhe8192" 00:21:28.768 ] 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_nvme_attach_controller", 00:21:28.768 "params": { 00:21:28.768 "name": "nvme0", 00:21:28.768 "trtype": "TCP", 00:21:28.768 "adrfam": "IPv4", 00:21:28.768 "traddr": "10.0.0.2", 00:21:28.768 "trsvcid": "4420", 00:21:28.768 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.768 "prchk_reftag": false, 00:21:28.768 "prchk_guard": false, 00:21:28.768 "ctrlr_loss_timeout_sec": 0, 00:21:28.768 "reconnect_delay_sec": 0, 00:21:28.768 
"fast_io_fail_timeout_sec": 0, 00:21:28.768 "psk": "key0", 00:21:28.768 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.768 "hdgst": false, 00:21:28.768 "ddgst": false 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_nvme_set_hotplug", 00:21:28.768 "params": { 00:21:28.768 "period_us": 100000, 00:21:28.768 "enable": false 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_enable_histogram", 00:21:28.768 "params": { 00:21:28.768 "name": "nvme0n1", 00:21:28.768 "enable": true 00:21:28.768 } 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "method": "bdev_wait_for_examine" 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }, 00:21:28.768 { 00:21:28.768 "subsystem": "nbd", 00:21:28.768 "config": [] 00:21:28.768 } 00:21:28.768 ] 00:21:28.768 }' 00:21:28.768 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:28.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:28.768 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:28.768 11:47:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.768 [2024-07-15 11:47:56.622385] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:28.768 [2024-07-15 11:47:56.622437] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009847 ] 00:21:28.768 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.768 [2024-07-15 11:47:56.692348] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.768 [2024-07-15 11:47:56.761635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.049 [2024-07-15 11:47:56.911846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.617 11:47:57 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:29.617 Running I/O for 1 seconds... 
00:21:30.995
00:21:30.995 Latency(us)
00:21:30.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.995 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:30.995 Verification LBA range: start 0x0 length 0x2000
00:21:30.995 nvme0n1 : 1.02 4326.48 16.90 0.00 0.00 29244.94 4613.73 66689.43
00:21:30.995 ===================================================================================================================
00:21:30.995 Total : 4326.48 16.90 0.00 0.00 29244.94 4613.73 66689.43
00:21:30.995 0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:21:30.995 nvmf_trace.0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2009847
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2009847 ']'
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2009847
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2009847
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2009847'
00:21:30.995 killing process with pid 2009847
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2009847
00:21:30.995 Received shutdown signal, test time was about 1.000000 seconds
00:21:30.995
00:21:30.995 Latency(us)
00:21:30.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.995 ===================================================================================================================
00:21:30.995 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:30.995 11:47:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2009847
00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync
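
The verify pass above moved roughly 4.3K IOPS (16.90 MiB/s) across the TLS-protected queue pair with zero failures. The secure channel works because both sides reference the same PSK file under the keyring name key0; condensed from the configs above into explicit RPC calls (NQNs, address, and key path as in this run; flag spellings can differ between SPDK releases, so treat this as a sketch rather than the suite's exact invocation):

  # Target side: register the PSK and require it for host1 on cnode1.
  scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.IV7W06ltnk
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0
  # Initiator side (bdevperf): attach with the same key name.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
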
00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:30.995 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:30.995 rmmod nvme_tcp 00:21:30.995 rmmod nvme_fabrics 00:21:30.996 rmmod nvme_keyring 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2009815 ']' 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2009815 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2009815 ']' 00:21:30.996 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2009815 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2009815 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2009815' 00:21:31.255 killing process with pid 2009815 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2009815 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2009815 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:31.255 11:47:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.838 11:48:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.838 11:48:01 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.3bb6sTRJSn /tmp/tmp.wMrBJhAve2 /tmp/tmp.IV7W06ltnk 00:21:33.838 00:21:33.838 real 1m26.432s 00:21:33.838 user 2m6.173s 00:21:33.838 sys 0m35.463s 00:21:33.838 11:48:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.838 11:48:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.838 ************************************ 00:21:33.838 END TEST nvmf_tls 00:21:33.838 ************************************ 00:21:33.838 11:48:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:33.838 11:48:01 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:33.838 11:48:01 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:33.838 11:48:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:33.838 11:48:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:33.838 ************************************ 00:21:33.838 START TEST nvmf_fips 00:21:33.838 ************************************ 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:33.838 * Looking for test storage... 00:21:33.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:33.838 
11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:21:33.838 Error setting digest 00:21:33.838 00D202ACB37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:33.838 00D202ACB37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:33.838 11:48:01 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:33.839 11:48:01 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.413 
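
Before the NIC probing below goes further, note what the FIPS gate just established: OpenSSL reports 3.0.9 (>= 3.0.0), the provider list contains both a base and a fips entry, and the deliberate openssl md5 attempt failed with an unsupported-digest error, which is exactly the behavior the test requires. A rough standalone equivalent of that check, assuming an OpenSSL 3.x build with the FIPS provider configured:

  # Both a base and a fips provider should appear in the list.
  openssl list -providers | grep -i 'name:'
  # MD5 is not FIPS-approved, so this must fail (non-zero exit) under FIPS mode.
  if echo test | openssl md5 >/dev/null 2>&1; then
      echo "FIPS mode is NOT enforced" >&2
      exit 1
  fi
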
11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:21:40.413 Found 0000:af:00.0 (0x8086 - 0x159b) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:21:40.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:21:40.413 Found net devices under 0000:af:00.0: cvl_0_0 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:21:40.413 Found net devices under 0000:af:00.1: cvl_0_1 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
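gather_supported_nvmf_pci_devs, traced above, discovers usable NICs in two steps: it collects PCI addresses whose vendor:device IDs match known E810/X722/Mellanox parts (0x8086:0x159b in this run), then maps each PCI address to its kernel interface through sysfs. A rough sketch of the sysfs step, assuming the same 0000:af:00.x addresses seen here:

    # Resolve a PCI address to the net interface(s) the kernel bound to it.
    pci=0000:af:00.0
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] && echo "Found net device under $pci: ${path##*/}"
    done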
net_devs+=("${pci_net_devs[@]}") 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.413 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.414 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:21:40.674 00:21:40.674 --- 10.0.0.2 ping statistics --- 00:21:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.674 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
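nvmf_tcp_init, traced above, splits the two E810 ports between a fresh network namespace (target side) and the host (initiator side), opens the NVMe/TCP port, and then ping-checks both directions. Reduced to its essential iproute2/iptables calls, with the interface names and addresses used in this run:

    ip netns add cvl_0_0_ns_spdk                 # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP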
00:21:40.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:21:40.674 00:21:40.674 --- 10.0.0.1 ping statistics --- 00:21:40.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.674 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2014636 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2014636 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2014636 ']' 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.674 11:48:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:40.674 [2024-07-15 11:48:08.764769] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:40.674 [2024-07-15 11:48:08.764819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.933 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.933 [2024-07-15 11:48:08.837186] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.933 [2024-07-15 11:48:08.908808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.933 [2024-07-15 11:48:08.908851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
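nvmfappstart then launches nvmf_tgt inside that namespace and blocks until the RPC socket answers. A compressed sketch of the launch-and-wait pattern; the polling loop below is a simplification standing in for the waitforlisten helper, not its actual implementation:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the UNIX-domain RPC socket until the target is ready.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done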
00:21:40.933 [2024-07-15 11:48:08.908861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.933 [2024-07-15 11:48:08.908870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.933 [2024-07-15 11:48:08.908877] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.933 [2024-07-15 11:48:08.908909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:41.502 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:41.760 [2024-07-15 11:48:09.746147] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.760 [2024-07-15 11:48:09.762146] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:41.761 [2024-07-15 11:48:09.762340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.761 [2024-07-15 11:48:09.790703] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:41.761 malloc0 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2014914 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2014914 /var/tmp/bdevperf.sock 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2014914 ']' 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
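Before the TLS test proper, fips.sh materializes a pre-shared key in the NVMe TLS interchange format into key.txt with owner-only permissions (the actual test key is visible in the trace above; the value below is a placeholder):

    key='NVMeTLSkey-1:01:<base64-psk-material>:'   # placeholder, not a usable key
    key_path=./key.txt
    echo -n "$key" > "$key_path"    # no trailing newline in the key file
    chmod 0600 "$key_path"          # PSK material must not be world-readable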
# local max_retries=100 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.761 11:48:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:42.019 [2024-07-15 11:48:09.883204] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:21:42.019 [2024-07-15 11:48:09.883255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2014914 ] 00:21:42.019 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.019 [2024-07-15 11:48:09.947638] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.019 [2024-07-15 11:48:10.023376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.588 11:48:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.588 11:48:10 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:21:42.588 11:48:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:42.847 [2024-07-15 11:48:10.811633] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.847 [2024-07-15 11:48:10.811717] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.847 TLSTESTn1 00:21:42.847 11:48:10 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:43.105 Running I/O for 10 seconds... 
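With the listener up, the test points bdevperf at the target: bdev_nvme_attach_controller creates a TLS-protected NVMe/TCP bdev using the PSK file, and perform_tests drives the 10-second, queue-depth-128 verify workload already configured on the bdevperf command line. The two RPC calls, isolated (paths shortened relative to this workspace):

    # Attach a TLS NVMe-oF controller as bdev "TLSTEST", authenticated via the PSK.
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk ./key.txt

    # Run the workload configured on bdevperf (-q 128 -o 4096 -w verify -t 10).
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests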
00:21:53.087
00:21:53.087                        Latency(us)
00:21:53.087 Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:21:53.087 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:53.087 Verification LBA range: start 0x0 length 0x2000
00:21:53.087 TLSTESTn1          : 10.03       4635.45  18.11  0.00    0.00  27561.62  6606.03  56203.67
00:21:53.087 ===================================================================================================================
00:21:53.087 Total              : 4635.45     18.11    0.00   0.00    27561.62  6606.03  56203.67
00:21:53.087 0
00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:53.087 nvmf_trace.0 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2014914 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2014914 ']' 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2014914 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.087 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2014914 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2014914' 00:21:53.347 killing process with pid 2014914 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2014914 00:21:53.347 Received shutdown signal, test time was about 10.000000 seconds
00:21:53.347
00:21:53.347                        Latency(us)
00:21:53.347 Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:21:53.347 ===================================================================================================================
00:21:53.347 Total              : 0.00        0.00  0.00   0.00    0.00  0.00     0.00
00:21:53.347 [2024-07-15 11:48:21.195517] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2014914 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips --
nvmf/common.sh@488 -- # nvmfcleanup 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.347 rmmod nvme_tcp 00:21:53.347 rmmod nvme_fabrics 00:21:53.347 rmmod nvme_keyring 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2014636 ']' 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2014636 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2014636 ']' 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2014636 00:21:53.347 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2014636 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2014636' 00:21:53.606 killing process with pid 2014636 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2014636 00:21:53.606 [2024-07-15 11:48:21.507724] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2014636 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:53.606 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:53.607 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.607 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.607 11:48:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.607 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.607 11:48:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.143 11:48:23 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:56.143 11:48:23 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:56.143 00:21:56.143 real 0m22.268s 00:21:56.143 user 0m21.814s 00:21:56.143 sys 0m11.355s 00:21:56.143 11:48:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:56.143 11:48:23 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.143 ************************************ 00:21:56.143 END TEST nvmf_fips 
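Teardown mirrors setup: unload the kernel NVMe/TCP modules, kill the target, dismantle the namespace, flush leftover addresses, and delete the key file. Roughly as follows (the _remove_spdk_ns internals are not shown in this trace, so the netns line is an assumption about what it does):

    modprobe -r nvme-tcp            # also drops nvme_fabrics/nvme_keyring deps, as logged above
    kill "$nvmfpid"                 # stop the target app ($nvmfpid from the launch sketch)
    ip netns del cvl_0_0_ns_spdk    # physical port falls back to the root namespace
    ip -4 addr flush cvl_0_1        # drop the initiator-side address
    rm -f ./key.txt                 # never leave PSK material behind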
00:21:56.143 ************************************ 00:21:56.143 11:48:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:56.143 11:48:23 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:21:56.143 11:48:23 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:21:56.143 11:48:23 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:21:56.143 11:48:23 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:21:56.143 11:48:23 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:21:56.143 11:48:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.749 11:48:30 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:02.750 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:02.750 11:48:30 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:02.750 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:02.750 Found net devices under 0000:af:00.0: cvl_0_0 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:02.750 Found net devices under 0000:af:00.1: cvl_0_1 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:02.750 11:48:30 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:02.750 11:48:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:02.750 11:48:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:02.750 11:48:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:02.750 ************************************ 00:22:02.750 START TEST nvmf_perf_adq 00:22:02.750 ************************************ 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:02.750 * Looking for test storage... 00:22:02.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.750 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.751 11:48:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:09.422 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:09.422 Found 0000:af:00.1 (0x8086 - 0x159b) 
00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:09.422 Found net devices under 0000:af:00.0: cvl_0_0 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:09.422 Found net devices under 0000:af:00.1: cvl_0_1 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:09.422 11:48:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:10.801 11:48:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:12.707 11:48:40 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:17.983 11:48:45 
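adq_reload_driver, at the end of the trace above, bounces the E810 driver so the ports come back without stale channel state before ADQ configuration begins, then waits for the links to settle:

    rmmod ice       # unload the Intel E810 driver
    modprobe ice    # load it fresh
    sleep 5         # give the ports time to re-enumerate and come up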
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.983 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:17.984 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:17.984 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:17.984 Found net devices under 0000:af:00.0: cvl_0_0 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:17.984 Found net devices under 0000:af:00.1: cvl_0_1 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.984 11:48:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.984 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:17.984 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.984 11:48:46 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:22:17.984 00:22:17.984 --- 10.0.0.2 ping statistics --- 00:22:17.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.984 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:22:17.984 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:18.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:18.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:22:18.244 00:22:18.244 --- 10.0.0.1 ping statistics --- 00:22:18.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.244 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2025161 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2025161 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2025161 ']' 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:18.244 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:18.244 [2024-07-15 11:48:46.187841] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:22:18.244 [2024-07-15 11:48:46.187893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.244 EAL: No free 2048 kB hugepages reported on node 1 00:22:18.244 [2024-07-15 11:48:46.262224] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.244 [2024-07-15 11:48:46.338639] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.244 [2024-07-15 11:48:46.338676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.244 [2024-07-15 11:48:46.338686] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.244 [2024-07-15 11:48:46.338695] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.244 [2024-07-15 11:48:46.338702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:18.244 [2024-07-15 11:48:46.338746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.244 [2024-07-15 11:48:46.338849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.244 [2024-07-15 11:48:46.338903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.244 [2024-07-15 11:48:46.338905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:19.181 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:19.181 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:19.181 11:48:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:19.181 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:19.181 11:48:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:19.181 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 [2024-07-15 11:48:47.165457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 Malloc1 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.182 [2024-07-15 11:48:47.224138] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2025358 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:19.182 11:48:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:19.182 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:21.720 
"tick_rate": 2500000000, 00:22:21.720 "poll_groups": [ 00:22:21.720 { 00:22:21.720 "name": "nvmf_tgt_poll_group_000", 00:22:21.720 "admin_qpairs": 1, 00:22:21.720 "io_qpairs": 1, 00:22:21.720 "current_admin_qpairs": 1, 00:22:21.720 "current_io_qpairs": 1, 00:22:21.720 "pending_bdev_io": 0, 00:22:21.720 "completed_nvme_io": 21271, 00:22:21.720 "transports": [ 00:22:21.720 { 00:22:21.720 "trtype": "TCP" 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 }, 00:22:21.720 { 00:22:21.720 "name": "nvmf_tgt_poll_group_001", 00:22:21.720 "admin_qpairs": 0, 00:22:21.720 "io_qpairs": 1, 00:22:21.720 "current_admin_qpairs": 0, 00:22:21.720 "current_io_qpairs": 1, 00:22:21.720 "pending_bdev_io": 0, 00:22:21.720 "completed_nvme_io": 21230, 00:22:21.720 "transports": [ 00:22:21.720 { 00:22:21.720 "trtype": "TCP" 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 }, 00:22:21.720 { 00:22:21.720 "name": "nvmf_tgt_poll_group_002", 00:22:21.720 "admin_qpairs": 0, 00:22:21.720 "io_qpairs": 1, 00:22:21.720 "current_admin_qpairs": 0, 00:22:21.720 "current_io_qpairs": 1, 00:22:21.720 "pending_bdev_io": 0, 00:22:21.720 "completed_nvme_io": 21623, 00:22:21.720 "transports": [ 00:22:21.720 { 00:22:21.720 "trtype": "TCP" 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 }, 00:22:21.720 { 00:22:21.720 "name": "nvmf_tgt_poll_group_003", 00:22:21.720 "admin_qpairs": 0, 00:22:21.720 "io_qpairs": 1, 00:22:21.720 "current_admin_qpairs": 0, 00:22:21.720 "current_io_qpairs": 1, 00:22:21.720 "pending_bdev_io": 0, 00:22:21.720 "completed_nvme_io": 21285, 00:22:21.720 "transports": [ 00:22:21.720 { 00:22:21.720 "trtype": "TCP" 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 } 00:22:21.720 ] 00:22:21.720 }' 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:21.720 11:48:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2025358 00:22:29.846 Initializing NVMe Controllers 00:22:29.846 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:29.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:29.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:29.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:29.846 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:29.846 Initialization complete. Launching workers. 
00:22:29.846 ======================================================== 00:22:29.846 Latency(us) 00:22:29.846 Device Information : IOPS MiB/s Average min max 00:22:29.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11204.10 43.77 5713.11 1808.10 10073.92 00:22:29.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11195.80 43.73 5716.17 2159.91 10321.65 00:22:29.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11431.00 44.65 5599.49 1794.00 9436.24 00:22:29.846 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11306.20 44.16 5661.01 2317.27 10448.15 00:22:29.846 ======================================================== 00:22:29.846 Total : 45137.10 176.32 5672.04 1794.00 10448.15 00:22:29.846 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.846 rmmod nvme_tcp 00:22:29.846 rmmod nvme_fabrics 00:22:29.846 rmmod nvme_keyring 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2025161 ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2025161 ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2025161' 00:22:29.846 killing process with pid 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2025161 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:29.846 11:48:57 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:31.752 11:48:59 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:31.752 11:48:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:31.752 11:48:59 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:33.131 11:49:01 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:35.666 11:49:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.019 
11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:41.019 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:41.019 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
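At this point the ice driver has just been reloaded (rmmod, modprobe, then a settle delay), so nvmftestinit re-runs the whole PCI discovery: gather_supported_nvmf_pci_devs matches the two E810 functions by vendor and device ID, then maps each function to its kernel net devices by globbing sysfs, which is what produces the "Found net devices under ..." lines below. A minimal sketch of that discovery step (IDs taken from the trace; the nullglob guard is an addition for stand-alone use):

# sketch: map Intel E810 PCI functions to their net device names via sysfs
shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 ]] || continue   # Intel
    [[ $(cat "$pci/device") == 0x159b ]] || continue   # E810, bound to ice
    devs=("$pci"/net/*)                                # empty if no netdev is bound
    (( ${#devs[@]} > 0 )) && echo "Found net devices under ${pci##*/}: ${devs[@]##*/}"
done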
00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:41.019 Found net devices under 0000:af:00.0: cvl_0_0 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:41.019 Found net devices under 0000:af:00.1: cvl_0_1 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.019 
11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:41.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:22:41.019 00:22:41.019 --- 10.0.0.2 ping statistics --- 00:22:41.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.019 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:22:41.019 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:22:41.020 00:22:41.020 --- 10.0.0.1 ping statistics --- 00:22:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.020 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:41.020 net.core.busy_poll = 1 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:41.020 net.core.busy_read = 1 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2029418 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2029418 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2029418 ']' 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.020 11:49:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.020 [2024-07-15 11:49:09.004450] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:22:41.020 [2024-07-15 11:49:09.004502] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.020 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.020 [2024-07-15 11:49:09.078302] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:41.277 [2024-07-15 11:49:09.152728] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.277 [2024-07-15 11:49:09.152768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.277 [2024-07-15 11:49:09.152777] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.277 [2024-07-15 11:49:09.152786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.277 [2024-07-15 11:49:09.152793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
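This restart is the ADQ-enabled configuration pass: hardware TC offload is switched on for the target port, busy polling is enabled, an mqprio root qdisc splits the queues into two traffic classes, and a flower filter pins NVMe/TCP traffic (TCP to 10.0.0.2:4420) to hardware TC 1; the target is then started with sock_impl_set_options --enable-placement-id 1 so connections are grouped by the queue they arrive on. Condensed from the trace into a stand-alone sketch (all commands run inside the target namespace in the harness; device name and queue split as logged; the XPS pinning done by scripts/perf/nvmf/set_xps_rxqs is omitted):

IF=cvl_0_0
ethtool --offload "$IF" hw-tc-offload on
ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1 net.core.busy_read=1
# TC0 = queues 0-1, TC1 = queues 2-3, offloaded to the NIC in channel mode
tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev "$IF" ingress
# steer NVMe/TCP for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw)
tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

With steering in place the later nvmf_get_stats check inverts: all four perf connections should land on the poll group serving the ADQ queue set, so the harness counts the poll groups left with zero IO qpairs and only fails if fewer than two are idle, rather than demanding an even spread.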
00:22:41.277 [2024-07-15 11:49:09.152837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.277 [2024-07-15 11:49:09.152948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:41.277 [2024-07-15 11:49:09.153031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:41.277 [2024-07-15 11:49:09.153033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.843 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 [2024-07-15 11:49:09.989664] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.102 11:49:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 Malloc1 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:42.102 [2024-07-15 11:49:10.040657] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2029544 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:42.102 11:49:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:42.102 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:22:44.008 "tick_rate": 2500000000, 00:22:44.008 "poll_groups": [ 00:22:44.008 { 00:22:44.008 "name": "nvmf_tgt_poll_group_000", 00:22:44.008 "admin_qpairs": 1, 00:22:44.008 "io_qpairs": 4, 00:22:44.008 "current_admin_qpairs": 1, 00:22:44.008 "current_io_qpairs": 4, 00:22:44.008 "pending_bdev_io": 0, 00:22:44.008 "completed_nvme_io": 47459, 00:22:44.008 "transports": [ 00:22:44.008 { 00:22:44.008 "trtype": "TCP" 00:22:44.008 } 00:22:44.008 ] 00:22:44.008 }, 00:22:44.008 { 00:22:44.008 "name": "nvmf_tgt_poll_group_001", 00:22:44.008 "admin_qpairs": 0, 00:22:44.008 "io_qpairs": 0, 00:22:44.008 "current_admin_qpairs": 0, 00:22:44.008 "current_io_qpairs": 0, 00:22:44.008 "pending_bdev_io": 0, 00:22:44.008 "completed_nvme_io": 0, 00:22:44.008 "transports": [ 00:22:44.008 { 00:22:44.008 "trtype": "TCP" 00:22:44.008 } 00:22:44.008 ] 00:22:44.008 }, 00:22:44.008 { 00:22:44.008 "name": "nvmf_tgt_poll_group_002", 00:22:44.008 "admin_qpairs": 0, 00:22:44.008 "io_qpairs": 0, 00:22:44.008 "current_admin_qpairs": 0, 00:22:44.008 "current_io_qpairs": 0, 00:22:44.008 "pending_bdev_io": 0, 00:22:44.008 "completed_nvme_io": 0, 00:22:44.008 
"transports": [ 00:22:44.008 { 00:22:44.008 "trtype": "TCP" 00:22:44.008 } 00:22:44.008 ] 00:22:44.008 }, 00:22:44.008 { 00:22:44.008 "name": "nvmf_tgt_poll_group_003", 00:22:44.008 "admin_qpairs": 0, 00:22:44.008 "io_qpairs": 0, 00:22:44.008 "current_admin_qpairs": 0, 00:22:44.008 "current_io_qpairs": 0, 00:22:44.008 "pending_bdev_io": 0, 00:22:44.008 "completed_nvme_io": 0, 00:22:44.008 "transports": [ 00:22:44.008 { 00:22:44.008 "trtype": "TCP" 00:22:44.008 } 00:22:44.008 ] 00:22:44.008 } 00:22:44.008 ] 00:22:44.008 }' 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:44.008 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:22:44.267 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:22:44.267 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:22:44.267 11:49:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2029544 00:22:52.415 Initializing NVMe Controllers 00:22:52.415 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:52.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:52.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:52.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:52.415 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:52.415 Initialization complete. Launching workers. 00:22:52.415 ======================================================== 00:22:52.415 Latency(us) 00:22:52.415 Device Information : IOPS MiB/s Average min max 00:22:52.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6283.90 24.55 10187.19 1317.60 55235.36 00:22:52.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6206.60 24.24 10315.33 1630.50 56325.15 00:22:52.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6582.50 25.71 9732.81 1287.40 55882.82 00:22:52.415 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5641.90 22.04 11343.72 1641.91 57515.16 00:22:52.415 ======================================================== 00:22:52.415 Total : 24714.90 96.54 10362.36 1287.40 57515.16 00:22:52.415 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:52.415 rmmod nvme_tcp 00:22:52.415 rmmod nvme_fabrics 00:22:52.415 rmmod nvme_keyring 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2029418 ']' 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 
2029418 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2029418 ']' 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2029418 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2029418 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2029418' 00:22:52.415 killing process with pid 2029418 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2029418 00:22:52.415 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2029418 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.675 11:49:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.578 11:49:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.578 11:49:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:54.578 00:22:54.578 real 0m52.098s 00:22:54.578 user 2m46.673s 00:22:54.578 sys 0m13.866s 00:22:54.578 11:49:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.578 11:49:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.578 ************************************ 00:22:54.578 END TEST nvmf_perf_adq 00:22:54.578 ************************************ 00:22:54.578 11:49:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:54.579 11:49:22 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:54.579 11:49:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:54.579 11:49:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.579 11:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:54.837 ************************************ 00:22:54.837 START TEST nvmf_shutdown 00:22:54.837 ************************************ 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:54.837 * Looking for test storage... 
00:22:54.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:54.837 ************************************ 00:22:54.837 START TEST nvmf_shutdown_tc1 00:22:54.837 ************************************ 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:22:54.837 11:49:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.837 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.838 11:49:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:01.391 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:01.651 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:01.651 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.651 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:01.652 11:49:29 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:01.652 Found net devices under 0000:af:00.0: cvl_0_0 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:01.652 Found net devices under 0000:af:00.1: cvl_0_1 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:01.652 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:01.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:23:01.912 00:23:01.912 --- 10.0.0.2 ping statistics --- 00:23:01.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.912 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:23:01.912 00:23:01.912 --- 10.0.0.1 ping statistics --- 00:23:01.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.912 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2035068 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2035068 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2035068 ']' 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.912 11:49:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:01.912 [2024-07-15 11:49:29.916889] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:23:01.912 [2024-07-15 11:49:29.916948] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.912 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.912 [2024-07-15 11:49:29.991596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:02.172 [2024-07-15 11:49:30.078075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.172 [2024-07-15 11:49:30.078114] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.172 [2024-07-15 11:49:30.078124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.172 [2024-07-15 11:49:30.078132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.172 [2024-07-15 11:49:30.078155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:02.172 [2024-07-15 11:49:30.078259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.172 [2024-07-15 11:49:30.078352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:02.172 [2024-07-15 11:49:30.078462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.172 [2024-07-15 11:49:30.078463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.740 [2024-07-15 11:49:30.778711] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:02.740 11:49:30 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.740 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:02.999 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:02.999 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:02.999 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.999 11:49:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:02.999 Malloc1 00:23:02.999 [2024-07-15 11:49:30.889634] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.999 Malloc2 00:23:02.999 Malloc3 00:23:02.999 Malloc4 00:23:02.999 Malloc5 00:23:02.999 Malloc6 00:23:03.258 Malloc7 00:23:03.258 Malloc8 00:23:03.258 Malloc9 00:23:03.258 Malloc10 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2035373 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2035373 
/var/tmp/bdevperf.sock 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2035373 ']' 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.258 { 00:23:03.258 "params": { 00:23:03.258 "name": "Nvme$subsystem", 00:23:03.258 "trtype": "$TEST_TRANSPORT", 00:23:03.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.258 "adrfam": "ipv4", 00:23:03.258 "trsvcid": "$NVMF_PORT", 00:23:03.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.258 "hdgst": ${hdgst:-false}, 00:23:03.258 "ddgst": ${ddgst:-false} 00:23:03.258 }, 00:23:03.258 "method": "bdev_nvme_attach_controller" 00:23:03.258 } 00:23:03.258 EOF 00:23:03.258 )") 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.258 { 00:23:03.258 "params": { 00:23:03.258 "name": "Nvme$subsystem", 00:23:03.258 "trtype": "$TEST_TRANSPORT", 00:23:03.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.258 "adrfam": "ipv4", 00:23:03.258 "trsvcid": "$NVMF_PORT", 00:23:03.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.258 "hdgst": ${hdgst:-false}, 00:23:03.258 "ddgst": ${ddgst:-false} 00:23:03.258 }, 00:23:03.258 "method": "bdev_nvme_attach_controller" 00:23:03.258 } 00:23:03.258 EOF 00:23:03.258 )") 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.258 { 00:23:03.258 "params": { 00:23:03.258 
"name": "Nvme$subsystem", 00:23:03.258 "trtype": "$TEST_TRANSPORT", 00:23:03.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.258 "adrfam": "ipv4", 00:23:03.258 "trsvcid": "$NVMF_PORT", 00:23:03.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.258 "hdgst": ${hdgst:-false}, 00:23:03.258 "ddgst": ${ddgst:-false} 00:23:03.258 }, 00:23:03.258 "method": "bdev_nvme_attach_controller" 00:23:03.258 } 00:23:03.258 EOF 00:23:03.258 )") 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.258 { 00:23:03.258 "params": { 00:23:03.258 "name": "Nvme$subsystem", 00:23:03.258 "trtype": "$TEST_TRANSPORT", 00:23:03.258 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.258 "adrfam": "ipv4", 00:23:03.258 "trsvcid": "$NVMF_PORT", 00:23:03.258 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.258 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.258 "hdgst": ${hdgst:-false}, 00:23:03.258 "ddgst": ${ddgst:-false} 00:23:03.258 }, 00:23:03.258 "method": "bdev_nvme_attach_controller" 00:23:03.258 } 00:23:03.258 EOF 00:23:03.258 )") 00:23:03.258 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.518 "params": { 00:23:03.518 "name": "Nvme$subsystem", 00:23:03.518 "trtype": "$TEST_TRANSPORT", 00:23:03.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.518 "adrfam": "ipv4", 00:23:03.518 "trsvcid": "$NVMF_PORT", 00:23:03.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.518 "hdgst": ${hdgst:-false}, 00:23:03.518 "ddgst": ${ddgst:-false} 00:23:03.518 }, 00:23:03.518 "method": "bdev_nvme_attach_controller" 00:23:03.518 } 00:23:03.518 EOF 00:23:03.518 )") 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 [2024-07-15 11:49:31.370815] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:23:03.518 [2024-07-15 11:49:31.370874] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.518 "params": { 00:23:03.518 "name": "Nvme$subsystem", 00:23:03.518 "trtype": "$TEST_TRANSPORT", 00:23:03.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.518 "adrfam": "ipv4", 00:23:03.518 "trsvcid": "$NVMF_PORT", 00:23:03.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.518 "hdgst": ${hdgst:-false}, 00:23:03.518 "ddgst": ${ddgst:-false} 00:23:03.518 }, 00:23:03.518 "method": "bdev_nvme_attach_controller" 00:23:03.518 } 00:23:03.518 EOF 00:23:03.518 )") 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.518 "params": { 00:23:03.518 "name": "Nvme$subsystem", 00:23:03.518 "trtype": "$TEST_TRANSPORT", 00:23:03.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.518 "adrfam": "ipv4", 00:23:03.518 "trsvcid": "$NVMF_PORT", 00:23:03.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.518 "hdgst": ${hdgst:-false}, 00:23:03.518 "ddgst": ${ddgst:-false} 00:23:03.518 }, 00:23:03.518 "method": "bdev_nvme_attach_controller" 00:23:03.518 } 00:23:03.518 EOF 00:23:03.518 )") 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.518 "params": { 00:23:03.518 "name": "Nvme$subsystem", 00:23:03.518 "trtype": "$TEST_TRANSPORT", 00:23:03.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.518 "adrfam": "ipv4", 00:23:03.518 "trsvcid": "$NVMF_PORT", 00:23:03.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.518 "hdgst": ${hdgst:-false}, 00:23:03.518 "ddgst": ${ddgst:-false} 00:23:03.518 }, 00:23:03.518 "method": "bdev_nvme_attach_controller" 00:23:03.518 } 00:23:03.518 EOF 00:23:03.518 )") 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.518 "params": { 00:23:03.518 "name": "Nvme$subsystem", 00:23:03.518 "trtype": "$TEST_TRANSPORT", 00:23:03.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.518 "adrfam": "ipv4", 00:23:03.518 "trsvcid": "$NVMF_PORT", 00:23:03.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.518 "hdgst": ${hdgst:-false}, 
00:23:03.518 "ddgst": ${ddgst:-false} 00:23:03.518 }, 00:23:03.518 "method": "bdev_nvme_attach_controller" 00:23:03.518 } 00:23:03.518 EOF 00:23:03.518 )") 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.518 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:03.518 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:03.518 { 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme$subsystem", 00:23:03.519 "trtype": "$TEST_TRANSPORT", 00:23:03.519 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "$NVMF_PORT", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:03.519 "hdgst": ${hdgst:-false}, 00:23:03.519 "ddgst": ${ddgst:-false} 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 } 00:23:03.519 EOF 00:23:03.519 )") 00:23:03.519 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:03.519 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:03.519 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:03.519 11:49:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme1", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme2", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme3", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme4", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme5", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:03.519 "hdgst": false, 00:23:03.519 
"ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme6", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme7", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme8", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme9", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 },{ 00:23:03.519 "params": { 00:23:03.519 "name": "Nvme10", 00:23:03.519 "trtype": "tcp", 00:23:03.519 "traddr": "10.0.0.2", 00:23:03.519 "adrfam": "ipv4", 00:23:03.519 "trsvcid": "4420", 00:23:03.519 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:03.519 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:03.519 "hdgst": false, 00:23:03.519 "ddgst": false 00:23:03.519 }, 00:23:03.519 "method": "bdev_nvme_attach_controller" 00:23:03.519 }' 00:23:03.519 [2024-07-15 11:49:31.443655] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.519 [2024-07-15 11:49:31.512729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:04.897 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.897 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:04.897 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:04.897 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.898 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:04.898 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.898 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2035373 00:23:04.898 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:04.898 11:49:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:06.276 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2035373 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2035068 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.276 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.276 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
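The surrounding trace records shutdown_tc1's core scenario: a bdev_svc placeholder app that attached to all ten subsystems is killed with SIGKILL (shutdown.sh@83; bash reports it as the "line 73: 2035373 Killed" job status), kill -0 then confirms the nvmf target (pid 2035068) survived the abrupt initiator exit, and bdevperf (shutdown.sh@91) is launched against the same ten subsystems. The repeated config+=("$(cat <<-EOF ... EOF)") / cat entries are gen_nvmf_target_json rendering one bdev_nvme_attach_controller stanza per subsystem ID. A minimal sketch of that pattern follows, assuming TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, and NVMF_PORT are exported by the calling test (as nvmf/common.sh does); gen_config_sketch is a hypothetical name, not the SPDK helper itself, and the real helper additionally wraps this comma-joined fragment in the full bdev-subsystem JSON that bdevperf reads via --json:

gen_config_sketch() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller stanza per subsystem ID, the same shape
        # visible in the trace; unquoted heredoc so $-expansions happen here.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the stanzas, matching the IFS=, and printf '%s\n' "${config[*]}"
    # steps logged at nvmf/common.sh@557-558.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_config_sketch 1 2 3 4 5 6 7 8 9 10   # one stanza per cnode1..cnode10, as in the run traced here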
00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 [2024-07-15 11:49:34.003966] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:23:06.277 [2024-07-15 11:49:34.004019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035722 ] 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.277 )") 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.277 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.277 { 00:23:06.277 "params": { 00:23:06.277 "name": "Nvme$subsystem", 00:23:06.277 "trtype": "$TEST_TRANSPORT", 00:23:06.277 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.277 "adrfam": "ipv4", 00:23:06.277 "trsvcid": "$NVMF_PORT", 00:23:06.277 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.277 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.277 "hdgst": ${hdgst:-false}, 00:23:06.277 "ddgst": ${ddgst:-false} 00:23:06.277 }, 00:23:06.277 "method": "bdev_nvme_attach_controller" 00:23:06.277 } 00:23:06.277 EOF 00:23:06.278 )") 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:06.278 { 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme$subsystem", 00:23:06.278 "trtype": "$TEST_TRANSPORT", 00:23:06.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "$NVMF_PORT", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:06.278 
"hdgst": ${hdgst:-false}, 00:23:06.278 "ddgst": ${ddgst:-false} 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 } 00:23:06.278 EOF 00:23:06.278 )") 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:06.278 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:06.278 11:49:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme1", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme2", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme3", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme4", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme5", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme6", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme7", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:06.278 "hdgst": false, 
00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme8", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme9", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 },{ 00:23:06.278 "params": { 00:23:06.278 "name": "Nvme10", 00:23:06.278 "trtype": "tcp", 00:23:06.278 "traddr": "10.0.0.2", 00:23:06.278 "adrfam": "ipv4", 00:23:06.278 "trsvcid": "4420", 00:23:06.278 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:06.278 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:06.278 "hdgst": false, 00:23:06.278 "ddgst": false 00:23:06.278 }, 00:23:06.278 "method": "bdev_nvme_attach_controller" 00:23:06.278 }' 00:23:06.279 [2024-07-15 11:49:34.077573] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.279 [2024-07-15 11:49:34.147862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.657 Running I/O for 1 seconds... 00:23:08.594 00:23:08.595 Latency(us) 00:23:08.595 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.595 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme1n1 : 1.02 250.62 15.66 0.00 0.00 253122.15 33135.00 206359.76 00:23:08.595 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme2n1 : 1.11 288.70 18.04 0.00 0.00 216639.41 19922.94 203843.17 00:23:08.595 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme3n1 : 1.11 287.18 17.95 0.00 0.00 214884.19 18769.51 239914.19 00:23:08.595 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme4n1 : 1.10 291.15 18.20 0.00 0.00 208854.38 16777.22 204682.04 00:23:08.595 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme5n1 : 1.09 234.32 14.65 0.00 0.00 255811.79 18874.37 234881.02 00:23:08.595 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme6n1 : 1.11 288.50 18.03 0.00 0.00 204750.68 16357.79 204682.04 00:23:08.595 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme7n1 : 1.14 336.85 21.05 0.00 0.00 173484.71 16986.93 200487.73 00:23:08.595 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme8n1 : 1.12 
286.15 17.88 0.00 0.00 200922.73 18245.22 210554.06 00:23:08.595 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme9n1 : 1.15 335.36 20.96 0.00 0.00 169366.19 16357.79 199648.87 00:23:08.595 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:08.595 Verification LBA range: start 0x0 length 0x400 00:23:08.595 Nvme10n1 : 1.17 328.93 20.56 0.00 0.00 170609.32 8178.89 209715.20 00:23:08.595 =================================================================================================================== 00:23:08.595 Total : 2927.76 182.98 0.00 0.00 202877.53 8178.89 239914.19 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.855 rmmod nvme_tcp 00:23:08.855 rmmod nvme_fabrics 00:23:08.855 rmmod nvme_keyring 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2035068 ']' 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2035068 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2035068 ']' 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2035068 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2035068 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:08.855 11:49:36 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2035068' 00:23:08.855 killing process with pid 2035068 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2035068 00:23:08.855 11:49:36 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2035068 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.492 11:49:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.397 00:23:11.397 real 0m16.434s 00:23:11.397 user 0m34.094s 00:23:11.397 sys 0m6.960s 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.397 ************************************ 00:23:11.397 END TEST nvmf_shutdown_tc1 00:23:11.397 ************************************ 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:11.397 ************************************ 00:23:11.397 START TEST nvmf_shutdown_tc2 00:23:11.397 ************************************ 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:11.397 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.398 11:49:39 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:11.398 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:11.398 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:11.398 Found net devices under 0000:af:00.0: cvl_0_0 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:11.398 Found net devices under 0000:af:00.1: cvl_0_1 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.398 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:11.657 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:11.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:23:11.658 00:23:11.658 --- 10.0.0.2 ping statistics --- 00:23:11.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.658 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:23:11.658 00:23:11.658 --- 10.0.0.1 ping statistics --- 00:23:11.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.658 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:11.658 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2036840 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2036840 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2036840 ']' 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.917 11:49:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:11.917 [2024-07-15 11:49:39.851286] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:11.917 [2024-07-15 11:49:39.851334] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.917 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.917 [2024-07-15 11:49:39.927887] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.917 [2024-07-15 11:49:40.021284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.917 [2024-07-15 11:49:40.021331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.917 [2024-07-15 11:49:40.021345] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.917 [2024-07-15 11:49:40.021357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.917 [2024-07-15 11:49:40.021367] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.917 [2024-07-15 11:49:40.021418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.917 [2024-07-15 11:49:40.021508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.917 [2024-07-15 11:49:40.021618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.917 [2024-07-15 11:49:40.021618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.854 [2024-07-15 11:49:40.695958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.854 11:49:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:12.854 Malloc1 00:23:12.854 [2024-07-15 11:49:40.809247] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.854 Malloc2 00:23:12.854 Malloc3 00:23:12.854 Malloc4 00:23:13.112 Malloc5 00:23:13.112 Malloc6 00:23:13.112 Malloc7 00:23:13.112 Malloc8 00:23:13.112 Malloc9 00:23:13.112 Malloc10 00:23:13.112 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.112 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:13.112 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2037150 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2037150 /var/tmp/bdevperf.sock 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2037150 ']' 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:13.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 [2024-07-15 11:49:41.310102] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:23:13.372 [2024-07-15 11:49:41.310156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037150 ] 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.372 { 00:23:13.372 "params": { 00:23:13.372 "name": "Nvme$subsystem", 00:23:13.372 "trtype": "$TEST_TRANSPORT", 00:23:13.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.372 "adrfam": "ipv4", 00:23:13.372 "trsvcid": "$NVMF_PORT", 00:23:13.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.372 "hdgst": ${hdgst:-false}, 00:23:13.372 "ddgst": ${ddgst:-false} 00:23:13.372 }, 00:23:13.372 "method": "bdev_nvme_attach_controller" 00:23:13.372 } 00:23:13.372 EOF 00:23:13.372 )") 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.372 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.373 { 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme$subsystem", 00:23:13.373 "trtype": "$TEST_TRANSPORT", 00:23:13.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "$NVMF_PORT", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.373 "hdgst": ${hdgst:-false}, 00:23:13.373 "ddgst": ${ddgst:-false} 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 } 00:23:13.373 EOF 00:23:13.373 )") 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:13.373 { 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme$subsystem", 00:23:13.373 "trtype": "$TEST_TRANSPORT", 00:23:13.373 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "$NVMF_PORT", 00:23:13.373 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:13.373 "hdgst": ${hdgst:-false}, 00:23:13.373 "ddgst": ${ddgst:-false} 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 } 00:23:13.373 EOF 00:23:13.373 )") 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:13.373 EAL: No free 2048 kB hugepages reported on node 1 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:13.373 11:49:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme1", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme2", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme3", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme4", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme5", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme6", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme7", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme8", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme9", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 },{ 00:23:13.373 "params": { 00:23:13.373 "name": "Nvme10", 00:23:13.373 "trtype": "tcp", 00:23:13.373 "traddr": "10.0.0.2", 00:23:13.373 "adrfam": "ipv4", 00:23:13.373 "trsvcid": "4420", 00:23:13.373 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:13.373 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:13.373 "hdgst": false, 00:23:13.373 "ddgst": false 00:23:13.373 }, 00:23:13.373 "method": "bdev_nvme_attach_controller" 00:23:13.373 }' 00:23:13.373 [2024-07-15 11:49:41.383381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.373 [2024-07-15 11:49:41.455379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.751 Running I/O for 10 seconds... 00:23:14.751 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.751 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:14.751 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:14.751 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.751 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b 
Nvme1n1 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:15.010 11:49:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:15.267 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2037150 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2037150 ']' 00:23:15.526 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2037150 00:23:15.527 11:49:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:15.527 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:15.527 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2037150 00:23:15.785 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:15.785 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:15.785 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2037150' 00:23:15.785 killing process with pid 2037150 00:23:15.785 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2037150 00:23:15.785 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2037150 00:23:15.785 Received shutdown signal, test time was about 0.908412 seconds 00:23:15.785 00:23:15.785 Latency(us) 00:23:15.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.785 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme1n1 : 0.89 291.40 18.21 0.00 0.00 216667.26 3355.44 208876.34 00:23:15.785 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme2n1 : 0.87 294.03 18.38 0.00 0.00 211535.67 17720.93 200487.73 00:23:15.785 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme3n1 : 0.90 354.27 22.14 0.00 0.00 172807.62 16252.93 202165.45 00:23:15.785 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme4n1 : 0.88 289.84 18.11 0.00 0.00 207236.71 19084.08 219781.53 00:23:15.785 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme5n1 : 0.90 288.58 18.04 0.00 0.00 204168.33 3185.05 212231.78 00:23:15.785 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme6n1 : 0.91 282.01 17.63 0.00 0.00 206029.00 16357.79 219781.53 00:23:15.785 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme7n1 : 0.88 291.90 18.24 0.00 0.00 194192.38 14260.63 200487.73 00:23:15.785 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme8n1 : 0.89 287.46 17.97 0.00 0.00 194293.56 17301.50 192937.98 00:23:15.785 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme9n1 : 0.87 219.72 13.73 0.00 0.00 248318.09 19293.80 231525.58 00:23:15.785 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.785 Verification LBA range: start 0x0 length 0x400 00:23:15.785 Nvme10n1 : 0.90 284.72 17.79 0.00 0.00 189057.43 17616.08 207198.62 00:23:15.785 
=================================================================================================================== 00:23:15.785 Total : 2883.93 180.25 0.00 0.00 202567.38 3185.05 231525.58 00:23:16.042 11:49:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:16.975 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2036840 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.976 rmmod nvme_tcp 00:23:16.976 rmmod nvme_fabrics 00:23:16.976 rmmod nvme_keyring 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2036840 ']' 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2036840 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2036840 ']' 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2036840 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.976 11:49:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2036840 00:23:16.976 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:16.976 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:16.976 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2036840' 00:23:16.976 killing process with pid 2036840 00:23:16.976 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2036840 00:23:16.976 11:49:45 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2036840 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.542 11:49:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.450 00:23:19.450 real 0m8.067s 00:23:19.450 user 0m23.861s 00:23:19.450 sys 0m1.687s 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.450 ************************************ 00:23:19.450 END TEST nvmf_shutdown_tc2 00:23:19.450 ************************************ 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:19.450 11:49:47 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:19.710 ************************************ 00:23:19.710 START TEST nvmf_shutdown_tc3 00:23:19.710 ************************************ 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.710 11:49:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:19.710 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:19.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:19.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:19.711 Found net devices under 0000:af:00.0: cvl_0_0 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:19.711 Found net devices under 0000:af:00.1: cvl_0_1 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:19.711 11:49:47 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:19.711 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:19.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:19.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:23:19.971 00:23:19.971 --- 10.0.0.2 ping statistics --- 00:23:19.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.971 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:19.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:19.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:23:19.971 00:23:19.971 --- 10.0.0.1 ping statistics --- 00:23:19.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:19.971 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2038345 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2038345 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec 
cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2038345 ']' 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:19.971 11:49:47 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:19.971 [2024-07-15 11:49:48.001856] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:19.971 [2024-07-15 11:49:48.001907] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.971 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.231 [2024-07-15 11:49:48.076152] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.231 [2024-07-15 11:49:48.149928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.231 [2024-07-15 11:49:48.149968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.231 [2024-07-15 11:49:48.149978] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.231 [2024-07-15 11:49:48.149987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.231 [2024-07-15 11:49:48.149995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
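Condensed, the nvmftestinit trace above is the following network-namespace recipe: the target-side E810 port is moved into its own namespace so initiator and target traffic really crosses the physical link between the two ports, and nvmf_tgt then runs entirely inside that namespace. A minimal bash restatement (a sketch reconstructed from the trace, not the verbatim nvmf/common.sh source; it assumes the same cvl_0_0/cvl_0_1 port names this run discovered):

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port leaves the default ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                   # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator sanity check
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E

Using a namespace for the target rather than a second host is what lets one machine exercise both ends of the TCP transport, which is exactly what the reactor and tracepoint notices above are reporting.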
00:23:20.231 [2024-07-15 11:49:48.150094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.231 [2024-07-15 11:49:48.150114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.231 [2024-07-15 11:49:48.150226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.231 [2024-07-15 11:49:48.150227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.800 [2024-07-15 11:49:48.861782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:20.800 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.060 11:49:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.060 Malloc1 00:23:21.060 [2024-07-15 11:49:48.972400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.060 Malloc2 00:23:21.060 Malloc3 00:23:21.060 Malloc4 00:23:21.060 Malloc5 00:23:21.060 Malloc6 00:23:21.320 Malloc7 00:23:21.320 Malloc8 00:23:21.320 Malloc9 00:23:21.320 Malloc10 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2038655 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2038655 /var/tmp/bdevperf.sock 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2038655 ']' 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
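The gen_nvmf_target_json trace that follows expands one bdev_nvme_attach_controller stanza per subsystem (Nvme1..Nvme10) and comma-joins them into the --json config handed to bdevperf on /dev/fd/63. A condensed, readable sketch of that loop (reconstructed from the trace, not copied from nvmf/common.sh; the enclosing bdevperf JSON wrapper is not visible in this log, so the sketch wraps the stanzas in a bare array purely so jq can validate them — in this run TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, as the printed config below confirms):

    gen_nvmf_target_json() {
        local subsystem config=()
        # one attach-controller stanza per requested subsystem number
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "$TEST_TRANSPORT",
        "traddr": "$NVMF_FIRST_TARGET_IP",
        "adrfam": "ipv4",
        "trsvcid": "$NVMF_PORT",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=,
        # the harness splices these stanzas into its full bdevperf config;
        # a bare array is used here only to keep the sketch self-contained
        printf '[%s]\n' "${config[*]}" | jq .
    }
    gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10   # the call shown in the trace above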
00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.320 { 00:23:21.320 "params": { 00:23:21.320 "name": "Nvme$subsystem", 00:23:21.320 "trtype": "$TEST_TRANSPORT", 00:23:21.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.320 "adrfam": "ipv4", 00:23:21.320 "trsvcid": "$NVMF_PORT", 00:23:21.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.320 "hdgst": ${hdgst:-false}, 00:23:21.320 "ddgst": ${ddgst:-false} 00:23:21.320 }, 00:23:21.320 "method": "bdev_nvme_attach_controller" 00:23:21.320 } 00:23:21.320 EOF 00:23:21.320 )") 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.320 { 00:23:21.320 "params": { 00:23:21.320 "name": "Nvme$subsystem", 00:23:21.320 "trtype": "$TEST_TRANSPORT", 00:23:21.320 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.320 "adrfam": "ipv4", 00:23:21.320 "trsvcid": "$NVMF_PORT", 00:23:21.320 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.320 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.320 "hdgst": ${hdgst:-false}, 00:23:21.320 "ddgst": ${ddgst:-false} 00:23:21.320 }, 00:23:21.320 "method": "bdev_nvme_attach_controller" 00:23:21.320 } 00:23:21.320 EOF 00:23:21.320 )") 00:23:21.320 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.580 "trtype": "$TEST_TRANSPORT", 00:23:21.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.580 "adrfam": "ipv4", 00:23:21.580 "trsvcid": "$NVMF_PORT", 00:23:21.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.580 "hdgst": ${hdgst:-false}, 00:23:21.580 "ddgst": ${ddgst:-false} 00:23:21.580 }, 00:23:21.580 "method": "bdev_nvme_attach_controller" 00:23:21.580 } 00:23:21.580 EOF 00:23:21.580 )") 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.580 "trtype": 
"$TEST_TRANSPORT", 00:23:21.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.580 "adrfam": "ipv4", 00:23:21.580 "trsvcid": "$NVMF_PORT", 00:23:21.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.580 "hdgst": ${hdgst:-false}, 00:23:21.580 "ddgst": ${ddgst:-false} 00:23:21.580 }, 00:23:21.580 "method": "bdev_nvme_attach_controller" 00:23:21.580 } 00:23:21.580 EOF 00:23:21.580 )") 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.580 "trtype": "$TEST_TRANSPORT", 00:23:21.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.580 "adrfam": "ipv4", 00:23:21.580 "trsvcid": "$NVMF_PORT", 00:23:21.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.580 "hdgst": ${hdgst:-false}, 00:23:21.580 "ddgst": ${ddgst:-false} 00:23:21.580 }, 00:23:21.580 "method": "bdev_nvme_attach_controller" 00:23:21.580 } 00:23:21.580 EOF 00:23:21.580 )") 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.580 "trtype": "$TEST_TRANSPORT", 00:23:21.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.580 "adrfam": "ipv4", 00:23:21.580 "trsvcid": "$NVMF_PORT", 00:23:21.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.580 "hdgst": ${hdgst:-false}, 00:23:21.580 "ddgst": ${ddgst:-false} 00:23:21.580 }, 00:23:21.580 "method": "bdev_nvme_attach_controller" 00:23:21.580 } 00:23:21.580 EOF 00:23:21.580 )") 00:23:21.580 [2024-07-15 11:49:49.455593] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:23:21.580 [2024-07-15 11:49:49.455646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038655 ] 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.580 "trtype": "$TEST_TRANSPORT", 00:23:21.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.580 "adrfam": "ipv4", 00:23:21.580 "trsvcid": "$NVMF_PORT", 00:23:21.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.580 "hdgst": ${hdgst:-false}, 00:23:21.580 "ddgst": ${ddgst:-false} 00:23:21.580 }, 00:23:21.580 "method": "bdev_nvme_attach_controller" 00:23:21.580 } 00:23:21.580 EOF 00:23:21.580 )") 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.580 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.580 { 00:23:21.580 "params": { 00:23:21.580 "name": "Nvme$subsystem", 00:23:21.581 "trtype": "$TEST_TRANSPORT", 00:23:21.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "$NVMF_PORT", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.581 "hdgst": ${hdgst:-false}, 00:23:21.581 "ddgst": ${ddgst:-false} 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 } 00:23:21.581 EOF 00:23:21.581 )") 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.581 { 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme$subsystem", 00:23:21.581 "trtype": "$TEST_TRANSPORT", 00:23:21.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "$NVMF_PORT", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.581 "hdgst": ${hdgst:-false}, 00:23:21.581 "ddgst": ${ddgst:-false} 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 } 00:23:21.581 EOF 00:23:21.581 )") 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:21.581 { 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme$subsystem", 00:23:21.581 "trtype": "$TEST_TRANSPORT", 00:23:21.581 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "$NVMF_PORT", 00:23:21.581 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:21.581 "hdgst": ${hdgst:-false}, 00:23:21.581 "ddgst": ${ddgst:-false} 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 } 00:23:21.581 EOF 00:23:21.581 )") 00:23:21.581 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:21.581 11:49:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme1", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme2", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme3", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme4", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme5", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme6", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme7", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": 
"nqn.2016-06.io.spdk:cnode7", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme8", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme9", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 },{ 00:23:21.581 "params": { 00:23:21.581 "name": "Nvme10", 00:23:21.581 "trtype": "tcp", 00:23:21.581 "traddr": "10.0.0.2", 00:23:21.581 "adrfam": "ipv4", 00:23:21.581 "trsvcid": "4420", 00:23:21.581 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:21.581 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:21.581 "hdgst": false, 00:23:21.581 "ddgst": false 00:23:21.581 }, 00:23:21.581 "method": "bdev_nvme_attach_controller" 00:23:21.581 }' 00:23:21.581 [2024-07-15 11:49:49.528438] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.581 [2024-07-15 11:49:49.597508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.483 Running I/O for 10 seconds... 00:23:24.058 11:49:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.058 11:49:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:24.059 11:49:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:24.059 11:49:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:49:51 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2038345 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2038345 ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2038345 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2038345 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2038345' 00:23:24.059 killing process with pid 2038345 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2038345 00:23:24.059 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2038345 00:23:24.059 [2024-07-15 11:49:52.121765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121822] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121854] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121863] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 [2024-07-15 11:49:52.121878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x184f320 is same with the state(5) to be set 00:23:24.059 (the preceding tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats a further 56 times for the same tqpair=0x184f320, timestamps 2024-07-15 11:49:52.121886 through 11:49:52.122379) 00:23:24.059 [2024-07-15 11:49:52.124010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.059 [2024-07-15 11:49:52.124046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:24.060 [2024-07-15 11:49:52.124328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 
11:49:52.124530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124732] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.060 [2024-07-15 11:49:52.124909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.060 [2024-07-15 11:49:52.124920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.124929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.124940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.124949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.124959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184fc60 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.124960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.124970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.124984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.124986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x184fc60 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.124993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ
sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.061 [2024-07-15 11:49:52.125319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.061 [2024-07-15 11:49:52.125330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125743] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb99b0 was disconnected and freed. reset controller.
00:23:24.061 [2024-07-15 11:49:52.125835] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125870] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125880] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125889] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125899] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125909] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125929] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125940] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125962] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125976] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.125986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.125990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.125996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.126006] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.126015] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.126026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.126047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.126057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.126067] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.061 [2024-07-15 11:49:52.126071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.061 [2024-07-15 11:49:52.126076] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
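The tcp.c:1607 errors interleaved here come from the target side: nvmf_tcp_qpair_set_recv_state is being asked to move a TCP qpair into the receive state it already occupies, and it prints the numeric state (here 5). A guard of roughly this shape produces that message; the enum names and values below are assumptions made for the sketch, not SPDK's actual definitions:

#include <stdio.h>

/* Hypothetical PDU receive-state enum; names and values are assumed
 * for illustration only ("state(5)" in the log would be the last). */
enum tcp_pdu_recv_state {
	RECV_STATE_AWAIT_PDU_READY = 0,
	RECV_STATE_AWAIT_PDU_CH,
	RECV_STATE_AWAIT_PDU_PSH,
	RECV_STATE_AWAIT_PDU_PAYLOAD,
	RECV_STATE_QUIESCING,
	RECV_STATE_ERROR, /* = 5 */
};

struct tcp_qpair {
	enum tcp_pdu_recv_state recv_state;
};

static void set_recv_state(struct tcp_qpair *tqpair, enum tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		/* Asked to re-enter the current state: log and bail out.
		 * This is the line that repeats throughout this section. */
		fprintf(stderr,
			"The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
}

The message repeating once per event is consistent with teardown traffic still arriving after the qpair has already reached its terminal receive state.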
00:23:24.061 [2024-07-15 11:49:52.126083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.061 [2024-07-15 11:49:52.126085] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126104] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126139] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126219] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126228] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126247] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126266] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126284] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126293] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126345] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126355] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126445] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126463] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850120 is same with the state(5) to be set
00:23:24.062 [2024-07-15 11:49:52.126481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.062 [2024-07-15 11:49:52.126551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.062 [2024-07-15 11:49:52.126561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.062 [2024-07-15 11:49:52.126572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.062 [2024-07-15 11:49:52.126581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.062 [2024-07-15 11:49:52.126591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.062 [2024-07-15 11:49:52.126600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.062 [2024-07-15 11:49:52.126612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.126986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.126997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.063 [2024-07-15 11:49:52.127125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.063 [2024-07-15 11:49:52.127139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.063 [2024-07-15 11:49:52.127143] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.063 [2024-07-15 11:49:52.127149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:23:24.063 [2024-07-15 11:49:52.127153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.063 [2024-07-15 11:49:52.127162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.063 [2024-07-15 11:49:52.127172] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.063 [2024-07-15 11:49:52.127182] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.063 [2024-07-15 11:49:52.127194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.063 [2024-07-15 11:49:52.127214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.063 [2024-07-15 11:49:52.127223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.063 [2024-07-15 11:49:52.127228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.063 [2024-07-15 11:49:52.127233] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127242] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127253] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127264] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127283] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127292] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127309] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127318] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127334] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d00340 was disconnected and freed. reset controller.
00:23:24.064 [2024-07-15 11:49:52.127335] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127346] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127356] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127365] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127473] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127493] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127563] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127587] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127617] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:24.064 [2024-07-15 11:49:52.127635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
00:23:24.064 [2024-07-15 11:49:52.127644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.064 [2024-07-15 11:49:52.127645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set
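The two bdev_nvme_disconnected_qpair_cb notices in this section (qpair 0x1cb99b0 and qpair 0x1d00340, each "disconnected and freed. reset controller.") show the recovery path: when the poller sees a disconnected I/O qpair it frees it, which completes everything still queued with ABORTED - SQ DELETION, and then arranges a controller reset. The callback type and the spdk_nvme_ctrlr_* calls below are real SPDK APIs, but the body is a simplified assumption of the flow, not the actual bdev_nvme implementation (which defers the reset rather than performing it inline):

#include "spdk/nvme.h"

/* Simplified disconnected-qpair handler; poll_group_ctx carrying the
 * controller pointer is an assumption made for this sketch. */
static void
disconnected_qpair_cb(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
	struct spdk_nvme_ctrlr *ctrlr = poll_group_ctx;

	/* Freeing the qpair aborts its outstanding commands; each one
	 * completes with ABORTED - SQ DELETION (00/08) as in the log. */
	spdk_nvme_ctrlr_free_io_qpair(qpair);

	/* Recover by resetting the controller and rebuilding qpairs. */
	spdk_nvme_ctrlr_reset(ctrlr);
}

/* Handed to the poll-group completion loop, e.g.:
 * spdk_nvme_poll_group_process_completions(group, 0, disconnected_qpair_cb);
 */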
00:23:24.064 [2024-07-15 11:49:52.127656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with [2024-07-15 11:49:52.127655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1the state(5) to be set 00:23:24.064 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127667] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127676] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:1[2024-07-15 11:49:52.127704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 11:49:52.127714] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127726] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127735] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850490 is same with the state(5) to be set 00:23:24.064 [2024-07-15 11:49:52.127749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.064 [2024-07-15 11:49:52.127823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.064 [2024-07-15 11:49:52.127837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.065 [2024-07-15 11:49:52.127847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.065 [2024-07-15 11:49:52.127857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.065 [2024-07-15 11:49:52.127867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.065 [2024-07-15 11:49:52.127877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.065 [2024-07-15 11:49:52.127887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.065 [2024-07-15 11:49:52.127897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.065 [2024-07-15 11:49:52.127906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.065 [2024-07-15 11:49:52.127917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.065 [2024-07-15 11:49:52.127928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.065 [2024-07-15 11:49:52.128672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128738] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128817] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128826] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128840] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128849] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128876] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128903] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128920] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the 
state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128969] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.128987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129053] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129097] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129140] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129149] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129158] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129186] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129203] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129238] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.129255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850930 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130190] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850dd0 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130908] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130931] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130939] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130948] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 
11:49:52.130965] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130974] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.130992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131000] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131009] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131018] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131026] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131054] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131063] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131090] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.065 [2024-07-15 11:49:52.131107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131116] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131125] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131133] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131150] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same 
with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131159] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131168] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131194] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131220] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131246] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131255] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131280] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131291] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131299] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131308] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131340] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131349] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131375] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131384] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131393] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131401] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131410] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131443] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.131460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851290 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132022] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132037] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132046] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132062] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132071] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132089] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the 
state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132109] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132135] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132144] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132162] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132197] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132206] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132223] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132231] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.132257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.066 [2024-07-15 11:49:52.141487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.066 [2024-07-15 11:49:52.141952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.066 [2024-07-15 11:49:52.141964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.141978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.141990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142069] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.142548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.142618] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d017d0 was disconnected and freed. reset controller. 
00:23:24.067 [2024-07-15 11:49:52.143223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 
[2024-07-15 11:49:52.143515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 11:49:52.143753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.067 [2024-07-15 11:49:52.143764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.067 [2024-07-15 
11:49:52.143779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.143983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.143995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 
11:49:52.144062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144327] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144592] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.068 [2024-07-15 11:49:52.144959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.144991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:23:24.068 [2024-07-15 11:49:52.145049] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb6880 was disconnected and freed. reset controller. 00:23:24.068 [2024-07-15 11:49:52.146336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:24.068 [2024-07-15 11:49:52.146391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e061d0 (9): Bad file descriptor 00:23:24.068 [2024-07-15 11:49:52.146438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.068 [2024-07-15 11:49:52.146454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.146467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.068 [2024-07-15 11:49:52.146480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.146493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.068 [2024-07-15 11:49:52.146506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.146518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.068 [2024-07-15 11:49:52.146531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.068 [2024-07-15 11:49:52.146543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b940 is same with the state(5) to be set 
00:23:24.069 [2024-07-15 11:49:52.146572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df21f0 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.146708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca2a10 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.146852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.146953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cabb20 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.146993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44180 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.147123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:24.069 [2024-07-15 11:49:52.147189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1782610 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.147258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.147359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45ee0 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.147391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.147404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.148798] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148809] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148850] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with 
the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148858] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148888] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148897] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148906] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148915] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148986] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.148994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149047] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149064] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149073] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149082] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149091] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149099] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.149127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32e40 is same with the state(5) to be set 00:23:24.069 [2024-07-15 11:49:52.155359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.155376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.155389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.155403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.155416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.069 [2024-07-15 11:49:52.155429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.069 [2024-07-15 11:49:52.155441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7fc30 is same with the state(5) to be set 00:23:24.336 [2024-07-15 11:49:52.159375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:24.336 [2024-07-15 11:49:52.159411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:24.336 [2024-07-15 11:49:52.159431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cabb20 (9): Bad file descriptor 00:23:24.336 [2024-07-15 11:49:52.159451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b940 (9): Bad file descriptor 00:23:24.336 [2024-07-15 11:49:52.159497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df21f0 (9): Bad file descriptor 00:23:24.336 
[2024-07-15 11:49:52.159518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca2a10 (9): Bad file descriptor 00:23:24.336 [2024-07-15 11:49:52.159563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.336 [2024-07-15 11:49:52.159579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.336 [2024-07-15 11:49:52.159594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.336 [2024-07-15 11:49:52.159606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.159620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.337 [2024-07-15 11:49:52.159633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.159646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:24.337 [2024-07-15 11:49:52.159659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.159672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2a150 is same with the state(5) to be set 00:23:24.337 [2024-07-15 11:49:52.159695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d44180 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.159715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782610 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.159743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d45ee0 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.159767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7fc30 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.159844] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.160448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:24.337 [2024-07-15 11:49:52.160876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.337 [2024-07-15 11:49:52.160900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e061d0 with addr=10.0.0.2, port=4420 00:23:24.337 [2024-07-15 11:49:52.160915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e061d0 is same with the state(5) to be set 00:23:24.337 [2024-07-15 11:49:52.162336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.337 [2024-07-15 11:49:52.162367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b940 with addr=10.0.0.2, port=4420 00:23:24.337 [2024-07-15 11:49:52.162381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b940 is same with the state(5) to be set 00:23:24.337 [2024-07-15 11:49:52.162589] 
posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.337 [2024-07-15 11:49:52.162606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cabb20 with addr=10.0.0.2, port=4420 00:23:24.337 [2024-07-15 11:49:52.162618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cabb20 is same with the state(5) to be set 00:23:24.337 [2024-07-15 11:49:52.162926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:24.337 [2024-07-15 11:49:52.162943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44180 with addr=10.0.0.2, port=4420 00:23:24.337 [2024-07-15 11:49:52.162955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44180 is same with the state(5) to be set 00:23:24.337 [2024-07-15 11:49:52.162971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e061d0 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.163055] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.163113] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.163167] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.163230] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.163312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b940 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.163332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cabb20 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.163348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d44180 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.163363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:24.337 [2024-07-15 11:49:52.163376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:24.337 [2024-07-15 11:49:52.163391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:24.337 [2024-07-15 11:49:52.163523] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:24.337 [2024-07-15 11:49:52.163544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.337 [2024-07-15 11:49:52.163556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:24.337 [2024-07-15 11:49:52.163568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:24.337 [2024-07-15 11:49:52.163585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:24.337 [2024-07-15 11:49:52.163601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:24.337 [2024-07-15 11:49:52.163613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:24.337 [2024-07-15 11:49:52.163625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:24.337 [2024-07-15 11:49:52.163641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:24.337 [2024-07-15 11:49:52.163653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:24.337 [2024-07-15 11:49:52.163665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:24.337 [2024-07-15 11:49:52.163734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.337 [2024-07-15 11:49:52.163748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.337 [2024-07-15 11:49:52.163759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.337 [2024-07-15 11:49:52.169422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2a150 (9): Bad file descriptor 00:23:24.337 [2024-07-15 11:49:52.169549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 
[2024-07-15 11:49:52.169904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.337 [2024-07-15 11:49:52.169969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.337 [2024-07-15 11:49:52.169979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.169990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.169999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 
11:49:52.170118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170329] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.338 [2024-07-15 11:49:52.170728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.338 [2024-07-15 11:49:52.170737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.170908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.170919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cff040 is same with the state(5) to be set 00:23:24.339 [2024-07-15 11:49:52.171967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.171983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.339 [2024-07-15 11:49:52.171998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.339 [2024-07-15 11:49:52.172008] nvme_qpair.c: 
00:23:24.339-00:23:24.340 [2024-07-15 11:49:52.171967-173345] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (same pattern as above, one pair per cid)
00:23:24.340 [2024-07-15 11:49:52.173356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d02cf0 is same with the state(5) to be set
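Each dump covers cid 0-63 with the lba stepping by exactly the command length (128 blocks), so every qpair was aborting one contiguous sequential-read window. A quick sanity check of that arithmetic; the 512-byte block size is an assumption on our part, since the log does not print it:

```c
/* Sketch: verify the cid/lba arithmetic in the dumps above. 64 READs at
 * lba 24576..32640 in steps of 128 blocks, len 128 each, form a single
 * contiguous range -- 4 MiB if the namespace uses 512-byte blocks
 * (assumed; block size is not shown in the log). */
#include <stdio.h>

int
main(void)
{
	const unsigned lba_first = 24576, lba_last = 32640, len = 128;
	const unsigned block_size = 512; /* assumption */
	unsigned n_cmds = (lba_last - lba_first) / len + 1;

	printf("commands: %u\n", n_cmds);                        /* 64 */
	printf("total: %u bytes\n", n_cmds * len * block_size);  /* 4194304 = 4 MiB */
	return 0;
}
```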
00:23:24.341-00:23:24.342 [2024-07-15 11:49:52.174387-175744] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:24.342 [2024-07-15 11:49:52.175755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d04180 is same with the state(5) to be set
00:23:24.342-00:23:24.344 [2024-07-15 11:49:52.176789-177936] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-53 nsid:1 lba:24576-31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 -- the capture breaks off mid-record:
00:23:24.344 [2024-07-15
11:49:52.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.177957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.177966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.177977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.177987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.177998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.178156] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.178166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7b3e0 is same with the state(5) to be set 00:23:24.344 [2024-07-15 11:49:52.179192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.344 [2024-07-15 11:49:52.179717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.344 [2024-07-15 11:49:52.179729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.179985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.179996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:24.345 [2024-07-15 11:49:52.180201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 
11:49:52.180403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.345 [2024-07-15 11:49:52.180414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.345 [2024-07-15 11:49:52.180423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.346 [2024-07-15 11:49:52.180434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.346 [2024-07-15 11:49:52.180443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.346 [2024-07-15 11:49:52.180454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.346 [2024-07-15 11:49:52.180463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.346 [2024-07-15 11:49:52.180473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:24.346 [2024-07-15 11:49:52.180484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:24.346 [2024-07-15 11:49:52.180494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb7d30 is same with the state(5) to be set 00:23:24.346 [2024-07-15 11:49:52.181435] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:24.346 [2024-07-15 11:49:52.181453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:24.346 [2024-07-15 11:49:52.181466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:23:24.346 [2024-07-15 11:49:52.181477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:23:24.346 [2024-07-15 11:49:52.181552] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:24.346 [2024-07-15 11:49:52.181624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:24.346 [2024-07-15 11:49:52.182005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.346 [2024-07-15 11:49:52.182022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7fc30 with addr=10.0.0.2, port=4420
00:23:24.346 [2024-07-15 11:49:52.182033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7fc30 is same with the state(5) to be set
00:23:24.346 [2024-07-15 11:49:52.182307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.346 [2024-07-15 11:49:52.182319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca2a10 with addr=10.0.0.2, port=4420
00:23:24.346 [2024-07-15 11:49:52.182328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca2a10 is same with the state(5) to be set
00:23:24.346 [2024-07-15 11:49:52.182647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.346 [2024-07-15 11:49:52.182659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d45ee0 with addr=10.0.0.2, port=4420
00:23:24.346 [2024-07-15 11:49:52.182668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d45ee0 is same with the state(5) to be set
00:23:24.346 [2024-07-15 11:49:52.183008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.346 [2024-07-15 11:49:52.183020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1782610 with addr=10.0.0.2, port=4420
00:23:24.346 [2024-07-15 11:49:52.183029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1782610 is same with the state(5) to be set
00:23:24.346 [... repeated nvme_qpair READ commands (sqid:1 nsid:1 len:128, cid:0-63, lba:16384-24448) and matching ABORTED - SQ DELETION (00/08) completions omitted ...]
00:23:24.347 [2024-07-15 11:49:52.185428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df0750 is same with the state(5) to be set
00:23:24.347 [2024-07-15 11:49:52.187139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:24.347 [2024-07-15 11:49:52.187139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:24.347 [2024-07-15 11:49:52.187161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:24.347 [2024-07-15 11:49:52.187172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:24.347 [2024-07-15 11:49:52.187184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:24.347 task offset: 30720 on job bdev=Nvme10n1 fails
00:23:24.347
00:23:24.347 Latency(us)
00:23:24.347 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:24.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.347 Job: Nvme1n1 ended in about 0.89 seconds with error
00:23:24.347 Verification LBA range: start 0x0 length 0x400
00:23:24.347 Nvme1n1 : 0.89 215.40 13.46 71.80 0.00 220678.96 17511.22 208037.48
00:23:24.347 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.347 Job: Nvme2n1 ended in about 0.88 seconds with error
00:23:24.347 Verification LBA range: start 0x0 length 0x400
00:23:24.347 Nvme2n1 : 0.88 219.17 13.70 73.06 0.00 213082.52 18769.51 223136.97
00:23:24.347 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.347 Job: Nvme3n1 ended in about 0.88 seconds with error
00:23:24.347 Verification LBA range: start 0x0 length 0x400
00:23:24.347 Nvme3n1 : 0.88 218.88 13.68 72.96 0.00 209674.44 20342.37 216426.09
00:23:24.347 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.347 Job: Nvme4n1 ended in about 0.89 seconds with error
00:23:24.347 Verification LBA range: start 0x0 length 0x400
00:23:24.347 Nvme4n1 : 0.89 214.82 13.43 71.61 0.00 210131.76 18664.65 231525.58
00:23:24.347 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.347 Job: Nvme5n1 ended in about 0.90 seconds with error
00:23:24.347 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme5n1 : 0.90 214.24 13.39 71.41 0.00 206983.99 18559.80 192937.98
00:23:24.348 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.348 Job: Nvme6n1 ended in about 0.90 seconds with error
00:23:24.348 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme6n1 : 0.90 213.67 13.35 71.22 0.00 203853.62 18559.80 209715.20
00:23:24.348 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.348 Job: Nvme7n1 ended in about 0.88 seconds with error
00:23:24.348 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme7n1 : 0.88 218.54 13.66 72.85 0.00 195118.90 15099.49 209715.20
00:23:24.348 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.348 Job: Nvme8n1 ended in about 0.90 seconds with error
00:23:24.348 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme8n1 : 0.90 213.12 13.32 71.04 0.00 196940.60 16462.64 208037.48
00:23:24.348 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.348 Job: Nvme9n1 ended in about 0.91 seconds with error
00:23:24.348 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme9n1 : 0.91 141.31 8.83 70.65 0.00 259363.64 19188.94 260046.85
00:23:24.348 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:24.348 Job: Nvme10n1 ended in about 0.87 seconds with error
00:23:24.348 Verification LBA range: start 0x0 length 0x400
00:23:24.348 Nvme10n1 : 0.87 221.78 13.86 73.93 0.00 180781.88 16567.50 206359.76
00:23:24.348 ===================================================================================================================
00:23:24.348 Total : 2090.92 130.68 720.53 0.00 208386.60 15099.49 260046.85
00:23:24.348 [2024-07-15 11:49:52.210017] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:24.348 [2024-07-15 11:49:52.210057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:23:24.348 [2024-07-15 11:49:52.210490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.348 [2024-07-15 11:49:52.210510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1df21f0 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.210523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df21f0 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.210539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7fc30 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.210555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca2a10 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.210571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d45ee0 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.210583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1782610 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.211030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.348 [2024-07-15 11:49:52.211046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e061d0 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.211056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e061d0 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.211375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.348 [2024-07-15 11:49:52.211388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d44180 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.211397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d44180 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.211656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.348 [2024-07-15 11:49:52.211669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cabb20 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.211678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cabb20 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.211974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:24.348 [2024-07-15 11:49:52.211986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e4b940 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.211995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4b940 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.212295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
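A quick arithmetic check on the bdevperf summary above (an editor's sketch, not part of the captured run): each job used a 65536-byte IO size, so MiB/s should equal IOPS × 65536 / 1048576, i.e. IOPS / 16, and the Total row should follow the same relation:

    # Verify MiB/s = IOPS / 16 for a 65536-byte IO size, using two rows from the table above.
    awk 'BEGIN {
        printf "Nvme1n1: %.2f MiB/s (table: 13.46)\n", 215.40 / 16
        printf "Total:   %.2f MiB/s (table: 130.68)\n", 2090.92 / 16
    }'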
00:23:24.348 [2024-07-15 11:49:52.212307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2a150 with addr=10.0.0.2, port=4420
00:23:24.348 [2024-07-15 11:49:52.212316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2a150 is same with the state(5) to be set
00:23:24.348 [2024-07-15 11:49:52.212327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df21f0 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.212348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.212359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:24.348 [2024-07-15 11:49:52.212376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.212385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.212395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:23:24.348 [2024-07-15 11:49:52.212406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.212415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.212423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:23:24.348 [2024-07-15 11:49:52.212434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.212443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.212452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:23:24.348 [2024-07-15 11:49:52.212491] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:24.348 [2024-07-15 11:49:52.212505] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:24.348 [2024-07-15 11:49:52.212517] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:24.348 [2024-07-15 11:49:52.212529] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:24.348 [2024-07-15 11:49:52.212541] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:24.348 [2024-07-15 11:49:52.212840] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.348 [2024-07-15 11:49:52.212852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.348 [2024-07-15 11:49:52.212860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.348 [2024-07-15 11:49:52.212867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.348 [2024-07-15 11:49:52.212878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e061d0 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d44180 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cabb20 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e4b940 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2a150 (9): Bad file descriptor
00:23:24.348 [2024-07-15 11:49:52.212933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.212942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.212950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:23:24.348 [2024-07-15 11:49:52.212989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:24.348 [2024-07-15 11:49:52.212999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.213007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.213016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:23:24.348 [2024-07-15 11:49:52.213026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.213035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.213044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:23:24.348 [2024-07-15 11:49:52.213054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.213062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.213071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:23:24.348 [2024-07-15 11:49:52.213081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:23:24.348 [2024-07-15 11:49:52.213090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:23:24.348 [2024-07-15 11:49:52.213098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
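The cascade above repeats one three-step pattern per subsystem: nvme_ctrlr_process_init reports the error state, spdk_nvme_ctrlr_reconnect_poll_async gives up, and nvme_ctrlr_fail marks the controller failed. A one-line editor's sketch to tally which subsystems ended up failed; "console.log" is again an assumed filename for a saved copy of this output:

    # Count 'in failed state' events per subsystem NQN in a saved copy of this log.
    grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*\] in failed state' console.log | sort | uniq -c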
00:23:24.348 [2024-07-15 11:49:52.213111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:24.348 [2024-07-15 11:49:52.213119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:24.348 [2024-07-15 11:49:52.213128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:24.348 [2024-07-15 11:49:52.213157] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.348 [2024-07-15 11:49:52.213166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.348 [2024-07-15 11:49:52.213174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.348 [2024-07-15 11:49:52.213181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.348 [2024-07-15 11:49:52.213190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:24.608 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:24.608 11:49:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2038655 00:23:25.547 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2038655) - No such process 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.547 rmmod nvme_tcp 00:23:25.547 rmmod nvme_fabrics 00:23:25.547 rmmod nvme_keyring 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:25.547 11:49:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.084 11:49:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:28.084 00:23:28.084 real 0m8.142s 00:23:28.084 user 0m20.257s 00:23:28.084 sys 0m1.689s 00:23:28.084 11:49:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.084 11:49:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:28.084 ************************************ 00:23:28.084 END TEST nvmf_shutdown_tc3 00:23:28.084 ************************************ 00:23:28.085 11:49:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:28.085 11:49:55 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:28.085 00:23:28.085 real 0m33.056s 00:23:28.085 user 1m18.369s 00:23:28.085 sys 0m10.623s 00:23:28.085 11:49:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:28.085 11:49:55 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 ************************************ 00:23:28.085 END TEST nvmf_shutdown 00:23:28.085 ************************************ 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:28.085 11:49:55 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 11:49:55 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 11:49:55 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:23:28.085 11:49:55 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:28.085 11:49:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:28.085 ************************************ 00:23:28.085 START TEST nvmf_multicontroller 00:23:28.085 ************************************ 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:28.085 * Looking for test storage... 
00:23:28.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:28.085 11:49:55 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:28.085 11:49:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:28.085 11:49:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.685 11:50:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:34.685 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:34.685 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:34.685 Found net devices under 0000:af:00.0: cvl_0_0 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:34.685 Found net devices under 0000:af:00.1: cvl_0_1 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.685 11:50:02 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:34.685 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:34.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:34.943 00:23:34.943 --- 10.0.0.2 ping statistics --- 00:23:34.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.943 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:34.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:23:34.943 00:23:34.943 --- 10.0.0.1 ping statistics --- 00:23:34.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.943 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2043151 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2043151 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2043151 ']' 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:34.943 11:50:02 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:34.943 [2024-07-15 11:50:02.957471] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:34.943 [2024-07-15 11:50:02.957526] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.943 EAL: No free 2048 kB hugepages reported on node 1 00:23:34.943 [2024-07-15 11:50:03.031184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:35.201 [2024-07-15 11:50:03.101009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.201 [2024-07-15 11:50:03.101049] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.201 [2024-07-15 11:50:03.101059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.202 [2024-07-15 11:50:03.101068] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.202 [2024-07-15 11:50:03.101076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
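The app_setup_trace notices above name the exact command for capturing a tracepoint snapshot from this target instance while it runs. A minimal sketch using only what the log itself prints; the SPDK_DIR path and the binary's location under build/bin are assumptions based on this job's workspace layout:

    # Snapshot the nvmf target's tracepoints, as the notice suggests ("spdk_trace -s nvmf -i 0").
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0 > /tmp/nvmf_trace.txt

    # Or keep the raw shared-memory trace file for offline analysis, as the log advises.
    cp /dev/shm/nvmf_trace.0 /tmp/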
00:23:35.202 [2024-07-15 11:50:03.101137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.202 [2024-07-15 11:50:03.101158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.202 [2024-07-15 11:50:03.101159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 [2024-07-15 11:50:03.809210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 Malloc0 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.768 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:35.768 [2024-07-15 11:50:03.869875] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 
11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 [2024-07-15 11:50:03.877791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 Malloc1 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.026 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2043206 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 2043206 /var/tmp/bdevperf.sock 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2043206 ']' 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:36.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.027 11:50:03 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.962 NVMe0n1 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.962 11:50:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.962 1 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:36.962 request:
00:23:36.962 {
00:23:36.962 "name": "NVMe0",
00:23:36.962 "trtype": "tcp",
00:23:36.962 "traddr": "10.0.0.2",
00:23:36.962 "adrfam": "ipv4",
00:23:36.962 "trsvcid": "4420",
00:23:36.962 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:36.962 "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:23:36.962 "hostaddr": "10.0.0.2",
00:23:36.962 "hostsvcid": "60000",
00:23:36.962 "prchk_reftag": false,
00:23:36.962 "prchk_guard": false,
00:23:36.962 "hdgst": false,
00:23:36.962 "ddgst": false,
00:23:36.962 "method": "bdev_nvme_attach_controller",
00:23:36.962 "req_id": 1
00:23:36.962 }
00:23:36.962 Got JSON-RPC error response
00:23:36.962 response:
00:23:36.962 {
00:23:36.962 "code": -114,
00:23:36.962 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:23:36.962 }
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:36.962 request:
00:23:36.962 {
00:23:36.962 "name": "NVMe0",
00:23:36.962 "trtype": "tcp",
00:23:36.962 "traddr": "10.0.0.2",
00:23:36.962 "adrfam": "ipv4",
00:23:36.962 "trsvcid": "4420",
00:23:36.962 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:36.962 "hostaddr": "10.0.0.2",
00:23:36.962 "hostsvcid": "60000",
00:23:36.962 "prchk_reftag": false,
00:23:36.962 "prchk_guard": false,
00:23:36.962 "hdgst": false,
00:23:36.962 "ddgst": false,
00:23:36.962 "method": "bdev_nvme_attach_controller",
00:23:36.962 "req_id": 1
00:23:36.962 }
00:23:36.962 Got JSON-RPC error response
00:23:36.962 response:
00:23:36.962 {
00:23:36.962 "code": -114,
00:23:36.962 "message": "A controller named NVMe0 already exists with the specified network path\n"
00:23:36.962 }
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:36.962 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:23:37.221 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:23:37.221 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable
00:23:37.221 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:37.221 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:23:37.221 request:
00:23:37.221 {
00:23:37.221 "name": "NVMe0",
00:23:37.221 "trtype": "tcp",
00:23:37.221 "traddr": "10.0.0.2",
00:23:37.221 "adrfam": "ipv4",
00:23:37.221 "trsvcid": "4420",
00:23:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:37.222 "hostaddr": "10.0.0.2",
00:23:37.222 "hostsvcid": "60000",
00:23:37.222 "prchk_reftag": false,
00:23:37.222 "prchk_guard": false,
00:23:37.222 "hdgst": false,
00:23:37.222 "ddgst": false,
00:23:37.222 "multipath": "disable",
00:23:37.222 "method": "bdev_nvme_attach_controller",
00:23:37.222 "req_id": 1
00:23:37.222 }
00:23:37.222 Got JSON-RPC error response
00:23:37.222 response:
00:23:37.222 {
00:23:37.222 "code": -114,
00:23:37.222 "message": "A controller named NVMe0 already exists and multipath is disabled\n"
00:23:37.222 }
00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1
00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:23:37.222 11:50:05
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 request: 00:23:37.222 { 00:23:37.222 "name": "NVMe0", 00:23:37.222 "trtype": "tcp", 00:23:37.222 "traddr": "10.0.0.2", 00:23:37.222 "adrfam": "ipv4", 00:23:37.222 "trsvcid": "4420", 00:23:37.222 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.222 "hostaddr": "10.0.0.2", 00:23:37.222 "hostsvcid": "60000", 00:23:37.222 "prchk_reftag": false, 00:23:37.222 "prchk_guard": false, 00:23:37.222 "hdgst": false, 00:23:37.222 "ddgst": false, 00:23:37.222 "multipath": "failover", 00:23:37.222 "method": "bdev_nvme_attach_controller", 00:23:37.222 "req_id": 1 00:23:37.222 } 00:23:37.222 Got JSON-RPC error response 00:23:37.222 response: 00:23:37.222 { 00:23:37.222 "code": -114, 00:23:37.222 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:37.222 } 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:37.222 11:50:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.600 0 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2043206 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2043206 ']' 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2043206 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2043206 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2043206' 00:23:38.600 killing process with pid 2043206 00:23:38.600 11:50:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2043206 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2043206 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.600 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:38.860 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:38.860 [2024-07-15 11:50:03.983888] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:38.860 [2024-07-15 11:50:03.983940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043206 ] 00:23:38.860 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.860 [2024-07-15 11:50:04.055613] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.860 [2024-07-15 11:50:04.131673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.860 [2024-07-15 11:50:05.297493] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 23d0e9e7-4048-440e-a1a6-f5eeaa1762f4 already exists 00:23:38.860 [2024-07-15 11:50:05.297525] bdev.c:7748:bdev_register: *ERROR*: Unable to add uuid:23d0e9e7-4048-440e-a1a6-f5eeaa1762f4 alias for bdev NVMe1n1 00:23:38.860 [2024-07-15 11:50:05.297536] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:38.860 Running I/O for 1 seconds... 
00:23:38.860 
00:23:38.860 Latency(us) 
00:23:38.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:38.860 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 
00:23:38.860 NVMe0n1 : 1.00 24567.16 95.97 0.00 0.00 5194.48 1546.65 7916.75 
00:23:38.860 ===================================================================================================================
00:23:38.860 Total : 24567.16 95.97 0.00 0.00 5194.48 1546.65 7916.75 
00:23:38.860 Received shutdown signal, test time was about 1.000000 seconds 
00:23:38.860 
00:23:38.860 Latency(us) 
00:23:38.860 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:38.860 ===================================================================================================================
00:23:38.860 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:23:38.860 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:38.860 11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
11:50:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2043151 ']'
11:50:06 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2043151
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2043151 ']'
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2043151
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2043151
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
11:50:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2043151'
killing process with pid 2043151
11:50:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2043151 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.120 11:50:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.653 11:50:09 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:41.653 00:23:41.653 real 0m13.244s 00:23:41.653 user 0m16.606s 00:23:41.653 sys 0m6.228s 00:23:41.653 11:50:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:41.653 11:50:09 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:41.653 ************************************ 00:23:41.653 END TEST nvmf_multicontroller 00:23:41.653 ************************************ 00:23:41.653 11:50:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:41.653 11:50:09 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.653 11:50:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:41.653 11:50:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:41.653 11:50:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:41.653 ************************************ 00:23:41.653 START TEST nvmf_aer 00:23:41.653 ************************************ 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:41.653 * Looking for test storage... 
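For reference, the duplicate-controller checks in the nvmf_multicontroller trace above reduce to a handful of bdev_nvme_attach_controller RPCs; the suite's NOT helper simply inverts the exit status so that the expected -114 failures count as a pass. A minimal sketch, with scripts/rpc.py standing in for the rpc_cmd wrapper and the sockets, NQNs, and ports taken from the trace (this is not a verbatim excerpt of multicontroller.sh):

    # First attach creates controller NVMe0; repeating it over the same network path
    # fails with JSON-RPC error -114, and '-x disable' / '-x failover' do not override
    # that for an identical traddr/trsvcid pair.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # A second listener port is a genuinely new path, so this attach succeeds:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
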
00:23:41.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.653 11:50:09 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.222 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:48.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 
0x159b)' 00:23:48.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:48.223 Found net devices under 0000:af:00.0: cvl_0_0 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:48.223 Found net devices under 0000:af:00.1: cvl_0_1 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.223 
11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:48.223 11:50:15 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:48.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:48.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms
00:23:48.223 
00:23:48.223 --- 10.0.0.2 ping statistics ---
00:23:48.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:48.223 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:48.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:48.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:23:48.223 00:23:48.223 --- 10.0.0.1 ping statistics --- 00:23:48.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.223 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.223 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2047411 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2047411 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2047411 ']' 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.224 11:50:16 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:48.482 [2024-07-15 11:50:16.330821] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:23:48.482 [2024-07-15 11:50:16.330873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.482 [2024-07-15 11:50:16.404938] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.482 [2024-07-15 11:50:16.476037] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.482 [2024-07-15 11:50:16.476078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
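The nvmfappstart call traced above boils down to launching nvmf_tgt inside the test namespace and polling its RPC socket before any configuration is sent. A rough equivalent using the paths from the log (the rpc_get_methods probe is illustrative; waitforlisten's actual loop differs):

    # Launch the target in the namespace with the tracepoint mask (-e) and core mask (-m)
    # recorded in the trace.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Hold off on configuration RPCs until the app answers on the default RPC socket.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
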
00:23:48.482 [2024-07-15 11:50:16.476087] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.482 [2024-07-15 11:50:16.476098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.482 [2024-07-15 11:50:16.476105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.482 [2024-07-15 11:50:16.476152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.482 [2024-07-15 11:50:16.476246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.482 [2024-07-15 11:50:16.476329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.482 [2024-07-15 11:50:16.476331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.090 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.090 [2024-07-15 11:50:17.189761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.349 Malloc0 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.349 [2024-07-15 11:50:17.244388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.349 [ 00:23:49.349 { 00:23:49.349 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.349 "subtype": "Discovery", 00:23:49.349 "listen_addresses": [], 00:23:49.349 "allow_any_host": true, 00:23:49.349 "hosts": [] 00:23:49.349 }, 00:23:49.349 { 00:23:49.349 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.349 "subtype": "NVMe", 00:23:49.349 "listen_addresses": [ 00:23:49.349 { 00:23:49.349 "trtype": "TCP", 00:23:49.349 "adrfam": "IPv4", 00:23:49.349 "traddr": "10.0.0.2", 00:23:49.349 "trsvcid": "4420" 00:23:49.349 } 00:23:49.349 ], 00:23:49.349 "allow_any_host": true, 00:23:49.349 "hosts": [], 00:23:49.349 "serial_number": "SPDK00000000000001", 00:23:49.349 "model_number": "SPDK bdev Controller", 00:23:49.349 "max_namespaces": 2, 00:23:49.349 "min_cntlid": 1, 00:23:49.349 "max_cntlid": 65519, 00:23:49.349 "namespaces": [ 00:23:49.349 { 00:23:49.349 "nsid": 1, 00:23:49.349 "bdev_name": "Malloc0", 00:23:49.349 "name": "Malloc0", 00:23:49.349 "nguid": "E2F540827BF946FFAAACD89232026536", 00:23:49.349 "uuid": "e2f54082-7bf9-46ff-aaac-d89232026536" 00:23:49.349 } 00:23:49.349 ] 00:23:49.349 } 00:23:49.349 ] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2047693 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:49.349 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:49.349 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.607 Malloc1 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.607 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.608 Asynchronous Event Request test 00:23:49.608 Attaching to 10.0.0.2 00:23:49.608 Attached to 10.0.0.2 00:23:49.608 Registering asynchronous event callbacks... 00:23:49.608 Starting namespace attribute notice tests for all controllers... 00:23:49.608 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:49.608 aer_cb - Changed Namespace 00:23:49.608 Cleaning up... 
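The namespace-change event above is triggered purely over the RPC socket; condensed from the trace (the touch-file handshake that waitforfile polls on is omitted):

    # Start the AER listener against the subsystem; -n 2 and -t /tmp/aer_touch_file
    # are the flags recorded in the trace at host/aer.sh@27.
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    # Adding a second namespace changes the subsystem's namespace list, which the
    # target reports as an asynchronous event (log page 4, namespace attribute notice).
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
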
00:23:49.608 [ 00:23:49.608 { 00:23:49.608 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:49.608 "subtype": "Discovery", 00:23:49.608 "listen_addresses": [], 00:23:49.608 "allow_any_host": true, 00:23:49.608 "hosts": [] 00:23:49.608 }, 00:23:49.608 { 00:23:49.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.608 "subtype": "NVMe", 00:23:49.608 "listen_addresses": [ 00:23:49.608 { 00:23:49.608 "trtype": "TCP", 00:23:49.608 "adrfam": "IPv4", 00:23:49.608 "traddr": "10.0.0.2", 00:23:49.608 "trsvcid": "4420" 00:23:49.608 } 00:23:49.608 ], 00:23:49.608 "allow_any_host": true, 00:23:49.608 "hosts": [], 00:23:49.608 "serial_number": "SPDK00000000000001", 00:23:49.608 "model_number": "SPDK bdev Controller", 00:23:49.608 "max_namespaces": 2, 00:23:49.608 "min_cntlid": 1, 00:23:49.608 "max_cntlid": 65519, 00:23:49.608 "namespaces": [ 00:23:49.608 { 00:23:49.608 "nsid": 1, 00:23:49.608 "bdev_name": "Malloc0", 00:23:49.608 "name": "Malloc0", 00:23:49.608 "nguid": "E2F540827BF946FFAAACD89232026536", 00:23:49.608 "uuid": "e2f54082-7bf9-46ff-aaac-d89232026536" 00:23:49.608 }, 00:23:49.608 { 00:23:49.608 "nsid": 2, 00:23:49.608 "bdev_name": "Malloc1", 00:23:49.608 "name": "Malloc1", 00:23:49.608 "nguid": "AC34478A907244999467CF93F9BBEF72", 00:23:49.608 "uuid": "ac34478a-9072-4499-9467-cf93f9bbef72" 00:23:49.608 } 00:23:49.608 ] 00:23:49.608 } 00:23:49.608 ] 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2047693 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.608 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.866 rmmod nvme_tcp 00:23:49.866 rmmod nvme_fabrics 00:23:49.866 rmmod nvme_keyring 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@124 -- # set -e 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2047411 ']' 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2047411 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2047411 ']' 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2047411 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047411 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047411' 00:23:49.866 killing process with pid 2047411 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2047411 00:23:49.866 11:50:17 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2047411 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.125 11:50:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.029 11:50:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:52.029 00:23:52.029 real 0m10.894s 00:23:52.029 user 0m8.047s 00:23:52.029 sys 0m5.829s 00:23:52.029 11:50:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:52.029 11:50:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:52.029 ************************************ 00:23:52.029 END TEST nvmf_aer 00:23:52.029 ************************************ 00:23:52.288 11:50:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:52.288 11:50:20 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:52.288 11:50:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:52.288 11:50:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:52.288 11:50:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:52.288 ************************************ 00:23:52.288 START TEST nvmf_async_init 00:23:52.288 ************************************ 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:52.288 * Looking for test storage... 
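nvmftestfini, which closes out both tests above, mirrors the setup in reverse; its observable effect is roughly the following sketch (the ip netns delete step is an assumption about what _remove_spdk_ns does, which the trace does not show explicitly):

    # Unload the host-side transport stack; with -v, modprobe -r prints the rmmod
    # cascade (nvme_tcp, nvme_fabrics, nvme_keyring) seen in the log.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # Stop the target, then drop the test namespace and flush the initiator-side address.
    kill "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
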
00:23:52.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.288 11:50:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=c5a8eefff1fb4e1393bfc6f9ad210564 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:52.289 11:50:20 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:52.289 11:50:20 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:23:58.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:23:58.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:23:58.859 Found net devices under 0000:af:00.0: cvl_0_0 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:23:58.859 Found net devices under 0000:af:00.1: cvl_0_1 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:58.859 11:50:25 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:23:58.859 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:58.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:58.860 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:58.860 00:23:58.860 --- 10.0.0.2 ping statistics --- 00:23:58.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.860 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:58.860 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:58.860 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:23:58.860 00:23:58.860 --- 10.0.0.1 ping statistics --- 00:23:58.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:58.860 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2051262 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2051262 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2051262 ']' 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:58.860 11:50:26 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:58.860 [2024-07-15 11:50:26.378341] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
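A condensed sketch of the network bring-up nvmf_tcp_init performed above, using the cvl_0_* names and 10.0.0.0/24 addressing from this run (both vary by host):

    # move one E810 port into a fresh namespace for the target; the peer
    # port stays in the root namespace and acts as the initiator
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target, verified above
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator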
00:23:58.860 [2024-07-15 11:50:26.378414] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.860 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.860 [2024-07-15 11:50:26.452767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.860 [2024-07-15 11:50:26.524810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.860 [2024-07-15 11:50:26.524855] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.860 [2024-07-15 11:50:26.524865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.860 [2024-07-15 11:50:26.524873] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.860 [2024-07-15 11:50:26.524880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.860 [2024-07-15 11:50:26.524901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 [2024-07-15 11:50:27.199254] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 null0 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.119 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.378 11:50:27 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g c5a8eefff1fb4e1393bfc6f9ad210564 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.378 [2024-07-15 11:50:27.239465] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.378 nvme0n1 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.378 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.378 [ 00:23:59.378 { 00:23:59.378 "name": "nvme0n1", 00:23:59.378 "aliases": [ 00:23:59.378 "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564" 00:23:59.378 ], 00:23:59.378 "product_name": "NVMe disk", 00:23:59.378 "block_size": 512, 00:23:59.378 "num_blocks": 2097152, 00:23:59.378 "uuid": "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564", 00:23:59.378 "assigned_rate_limits": { 00:23:59.378 "rw_ios_per_sec": 0, 00:23:59.378 "rw_mbytes_per_sec": 0, 00:23:59.378 "r_mbytes_per_sec": 0, 00:23:59.378 "w_mbytes_per_sec": 0 00:23:59.378 }, 00:23:59.378 "claimed": false, 00:23:59.378 "zoned": false, 00:23:59.378 "supported_io_types": { 00:23:59.378 "read": true, 00:23:59.378 "write": true, 00:23:59.378 "unmap": false, 00:23:59.378 "flush": true, 00:23:59.378 "reset": true, 00:23:59.378 "nvme_admin": true, 00:23:59.378 "nvme_io": true, 00:23:59.378 "nvme_io_md": false, 00:23:59.378 "write_zeroes": true, 00:23:59.378 "zcopy": false, 00:23:59.378 "get_zone_info": false, 00:23:59.378 "zone_management": false, 00:23:59.378 "zone_append": false, 00:23:59.378 "compare": true, 00:23:59.378 "compare_and_write": true, 00:23:59.378 "abort": true, 00:23:59.378 "seek_hole": false, 00:23:59.378 "seek_data": false, 00:23:59.378 "copy": true, 00:23:59.378 "nvme_iov_md": false 00:23:59.378 }, 00:23:59.378 "memory_domains": [ 00:23:59.378 { 00:23:59.378 "dma_device_id": "system", 00:23:59.638 "dma_device_type": 1 00:23:59.638 } 00:23:59.638 ], 00:23:59.638 "driver_specific": { 00:23:59.638 "nvme": [ 00:23:59.638 { 00:23:59.638 "trid": { 00:23:59.638 "trtype": "TCP", 00:23:59.638 "adrfam": "IPv4", 00:23:59.638 "traddr": "10.0.0.2", 
00:23:59.638 "trsvcid": "4420", 00:23:59.638 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:59.638 }, 00:23:59.638 "ctrlr_data": { 00:23:59.638 "cntlid": 1, 00:23:59.638 "vendor_id": "0x8086", 00:23:59.638 "model_number": "SPDK bdev Controller", 00:23:59.638 "serial_number": "00000000000000000000", 00:23:59.638 "firmware_revision": "24.09", 00:23:59.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:59.638 "oacs": { 00:23:59.638 "security": 0, 00:23:59.638 "format": 0, 00:23:59.638 "firmware": 0, 00:23:59.638 "ns_manage": 0 00:23:59.638 }, 00:23:59.638 "multi_ctrlr": true, 00:23:59.638 "ana_reporting": false 00:23:59.638 }, 00:23:59.638 "vs": { 00:23:59.638 "nvme_version": "1.3" 00:23:59.638 }, 00:23:59.638 "ns_data": { 00:23:59.638 "id": 1, 00:23:59.638 "can_share": true 00:23:59.638 } 00:23:59.638 } 00:23:59.638 ], 00:23:59.638 "mp_policy": "active_passive" 00:23:59.638 } 00:23:59.638 } 00:23:59.638 ] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 [2024-07-15 11:50:27.491997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:59.638 [2024-07-15 11:50:27.492053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ae10 (9): Bad file descriptor 00:23:59.638 [2024-07-15 11:50:27.623925] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 [ 00:23:59.638 { 00:23:59.638 "name": "nvme0n1", 00:23:59.638 "aliases": [ 00:23:59.638 "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564" 00:23:59.638 ], 00:23:59.638 "product_name": "NVMe disk", 00:23:59.638 "block_size": 512, 00:23:59.638 "num_blocks": 2097152, 00:23:59.638 "uuid": "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564", 00:23:59.638 "assigned_rate_limits": { 00:23:59.638 "rw_ios_per_sec": 0, 00:23:59.638 "rw_mbytes_per_sec": 0, 00:23:59.638 "r_mbytes_per_sec": 0, 00:23:59.638 "w_mbytes_per_sec": 0 00:23:59.638 }, 00:23:59.638 "claimed": false, 00:23:59.638 "zoned": false, 00:23:59.638 "supported_io_types": { 00:23:59.638 "read": true, 00:23:59.638 "write": true, 00:23:59.638 "unmap": false, 00:23:59.638 "flush": true, 00:23:59.638 "reset": true, 00:23:59.638 "nvme_admin": true, 00:23:59.638 "nvme_io": true, 00:23:59.638 "nvme_io_md": false, 00:23:59.638 "write_zeroes": true, 00:23:59.638 "zcopy": false, 00:23:59.638 "get_zone_info": false, 00:23:59.638 "zone_management": false, 00:23:59.638 "zone_append": false, 00:23:59.638 "compare": true, 00:23:59.638 "compare_and_write": true, 00:23:59.638 "abort": true, 00:23:59.638 "seek_hole": false, 00:23:59.638 "seek_data": false, 00:23:59.638 "copy": true, 00:23:59.638 "nvme_iov_md": false 00:23:59.638 }, 00:23:59.638 "memory_domains": [ 00:23:59.638 { 00:23:59.638 "dma_device_id": "system", 00:23:59.638 "dma_device_type": 
1 00:23:59.638 } 00:23:59.638 ], 00:23:59.638 "driver_specific": { 00:23:59.638 "nvme": [ 00:23:59.638 { 00:23:59.638 "trid": { 00:23:59.638 "trtype": "TCP", 00:23:59.638 "adrfam": "IPv4", 00:23:59.638 "traddr": "10.0.0.2", 00:23:59.638 "trsvcid": "4420", 00:23:59.638 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:59.638 }, 00:23:59.638 "ctrlr_data": { 00:23:59.638 "cntlid": 2, 00:23:59.638 "vendor_id": "0x8086", 00:23:59.638 "model_number": "SPDK bdev Controller", 00:23:59.638 "serial_number": "00000000000000000000", 00:23:59.638 "firmware_revision": "24.09", 00:23:59.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:59.638 "oacs": { 00:23:59.638 "security": 0, 00:23:59.638 "format": 0, 00:23:59.638 "firmware": 0, 00:23:59.638 "ns_manage": 0 00:23:59.638 }, 00:23:59.638 "multi_ctrlr": true, 00:23:59.638 "ana_reporting": false 00:23:59.638 }, 00:23:59.638 "vs": { 00:23:59.638 "nvme_version": "1.3" 00:23:59.638 }, 00:23:59.638 "ns_data": { 00:23:59.638 "id": 1, 00:23:59.638 "can_share": true 00:23:59.638 } 00:23:59.638 } 00:23:59.638 ], 00:23:59.638 "mp_policy": "active_passive" 00:23:59.638 } 00:23:59.638 } 00:23:59.638 ] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PmbXNWiuo8 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PmbXNWiuo8 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:59.638 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.639 [2024-07-15 11:50:27.676570] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:59.639 [2024-07-15 11:50:27.676686] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PmbXNWiuo8 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
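The rpc_cmd calls above provision the target end to end. Expressed directly against scripts/rpc.py (rpc_cmd is the suite's wrapper around it; the default /var/tmp/spdk.sock socket is assumed here), the plaintext leg of the sequence is:

    # create the TCP transport; the suite passes "-t tcp -o" via NVMF_TRANSPORT_OPTS
    scripts/rpc.py nvmf_create_transport -t tcp -o
    # 1024 MiB null bdev with 512-byte blocks -> the num_blocks 2097152
    # reported by bdev_get_bdevs above
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py bdev_wait_for_examine
    # subsystem allowing any host (-a), namespace keyed by the generated nguid
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g c5a8eefff1fb4e1393bfc6f9ad210564
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # initiator attaches, and bdev_get_bdevs shows cntlid 1; after
    # bdev_nvme_reset_controller the same bdev comes back with cntlid 2
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0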
00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.639 [2024-07-15 11:50:27.684588] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PmbXNWiuo8 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.639 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.639 [2024-07-15 11:50:27.692620] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:59.639 [2024-07-15 11:50:27.692662] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:59.899 nvme0n1 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.899 [ 00:23:59.899 { 00:23:59.899 "name": "nvme0n1", 00:23:59.899 "aliases": [ 00:23:59.899 "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564" 00:23:59.899 ], 00:23:59.899 "product_name": "NVMe disk", 00:23:59.899 "block_size": 512, 00:23:59.899 "num_blocks": 2097152, 00:23:59.899 "uuid": "c5a8eeff-f1fb-4e13-93bf-c6f9ad210564", 00:23:59.899 "assigned_rate_limits": { 00:23:59.899 "rw_ios_per_sec": 0, 00:23:59.899 "rw_mbytes_per_sec": 0, 00:23:59.899 "r_mbytes_per_sec": 0, 00:23:59.899 "w_mbytes_per_sec": 0 00:23:59.899 }, 00:23:59.899 "claimed": false, 00:23:59.899 "zoned": false, 00:23:59.899 "supported_io_types": { 00:23:59.899 "read": true, 00:23:59.899 "write": true, 00:23:59.899 "unmap": false, 00:23:59.899 "flush": true, 00:23:59.899 "reset": true, 00:23:59.899 "nvme_admin": true, 00:23:59.899 "nvme_io": true, 00:23:59.899 "nvme_io_md": false, 00:23:59.899 "write_zeroes": true, 00:23:59.899 "zcopy": false, 00:23:59.899 "get_zone_info": false, 00:23:59.899 "zone_management": false, 00:23:59.899 "zone_append": false, 00:23:59.899 "compare": true, 00:23:59.899 "compare_and_write": true, 00:23:59.899 "abort": true, 00:23:59.899 "seek_hole": false, 00:23:59.899 "seek_data": false, 00:23:59.899 "copy": true, 00:23:59.899 "nvme_iov_md": false 00:23:59.899 }, 00:23:59.899 "memory_domains": [ 00:23:59.899 { 00:23:59.899 "dma_device_id": "system", 00:23:59.899 "dma_device_type": 1 00:23:59.899 } 00:23:59.899 ], 00:23:59.899 "driver_specific": { 00:23:59.899 "nvme": [ 00:23:59.899 { 00:23:59.899 "trid": { 00:23:59.899 "trtype": "TCP", 00:23:59.899 "adrfam": "IPv4", 00:23:59.899 "traddr": "10.0.0.2", 00:23:59.899 "trsvcid": "4421", 00:23:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:59.899 }, 00:23:59.899 "ctrlr_data": { 00:23:59.899 "cntlid": 3, 00:23:59.899 "vendor_id": "0x8086", 00:23:59.899 "model_number": "SPDK bdev Controller", 00:23:59.899 "serial_number": "00000000000000000000", 00:23:59.899 "firmware_revision": "24.09", 00:23:59.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:23:59.899 "oacs": { 00:23:59.899 "security": 0, 00:23:59.899 "format": 0, 00:23:59.899 "firmware": 0, 00:23:59.899 "ns_manage": 0 00:23:59.899 }, 00:23:59.899 "multi_ctrlr": true, 00:23:59.899 "ana_reporting": false 00:23:59.899 }, 00:23:59.899 "vs": { 00:23:59.899 "nvme_version": "1.3" 00:23:59.899 }, 00:23:59.899 "ns_data": { 00:23:59.899 "id": 1, 00:23:59.899 "can_share": true 00:23:59.899 } 00:23:59.899 } 00:23:59.899 ], 00:23:59.899 "mp_policy": "active_passive" 00:23:59.899 } 00:23:59.899 } 00:23:59.899 ] 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.PmbXNWiuo8 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.899 rmmod nvme_tcp 00:23:59.899 rmmod nvme_fabrics 00:23:59.899 rmmod nvme_keyring 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2051262 ']' 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2051262 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2051262 ']' 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2051262 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2051262 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2051262' 00:23:59.899 killing process with pid 2051262 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2051262 00:23:59.899 [2024-07-15 11:50:27.915938] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:23:59.899 [2024-07-15 11:50:27.915962] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:59.899 11:50:27 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2051262 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.159 11:50:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:00.160 11:50:28 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.131 11:50:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:02.131 00:24:02.131 real 0m9.952s 00:24:02.131 user 0m3.369s 00:24:02.131 sys 0m4.888s 00:24:02.131 11:50:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.131 11:50:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:02.131 ************************************ 00:24:02.131 END TEST nvmf_async_init 00:24:02.131 ************************************ 00:24:02.131 11:50:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.131 11:50:30 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:02.131 11:50:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.131 11:50:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.131 11:50:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.400 ************************************ 00:24:02.400 START TEST dma 00:24:02.400 ************************************ 00:24:02.400 11:50:30 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:02.400 * Looking for test storage... 
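The TLS leg at the end of nvmf_async_init exercises the experimental PSK path that the two deprecation warnings above refer to. As a sketch (same rpc.py wrapper assumption as before; the PSK value is the one echoed in this log, and the redirection into the key file is assumed since xtrace does not show it):

    key=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
    chmod 0600 "$key"
    # require explicit host registration, then open a TLS listener on 4421
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk "$key"
    # initiator reconnects over the secure channel with the same key
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    rm -f "$key"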
00:24:02.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.400 11:50:30 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.400 11:50:30 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.400 11:50:30 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.400 11:50:30 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.400 11:50:30 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.400 11:50:30 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.400 11:50:30 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.400 11:50:30 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:02.400 11:50:30 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.400 11:50:30 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.400 11:50:30 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:02.400 11:50:30 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:02.400 00:24:02.400 real 0m0.127s 00:24:02.400 user 0m0.052s 00:24:02.400 sys 0m0.086s 00:24:02.400 11:50:30 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:02.400 11:50:30 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:02.400 ************************************ 00:24:02.400 END TEST dma 00:24:02.400 ************************************ 00:24:02.400 11:50:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:02.400 11:50:30 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:02.400 11:50:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:02.400 11:50:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:02.400 11:50:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.400 ************************************ 00:24:02.400 START TEST nvmf_identify 00:24:02.400 ************************************ 00:24:02.400 11:50:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:02.660 * Looking for test storage... 
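The dma test above is effectively a no-op for this run: the DMA offload path only applies to RDMA transports, so host/dma.sh reduces to the guard echoed at dma.sh@12-13 (the literal "tcp" is the expanded --transport argument):

    # host/dma.sh, lines 12-13 as traced above
    if [ "tcp" != "rdma" ]; then
        exit 0
    fi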
00:24:02.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:02.660 11:50:30 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:09.231 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:09.231 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:09.231 Found net devices under 0000:af:00.0: cvl_0_0 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:09.231 Found net devices under 0000:af:00.1: cvl_0_1 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.231 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:24:09.232 00:24:09.232 --- 10.0.0.2 ping statistics --- 00:24:09.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.232 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:24:09.232 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.248 ms 00:24:09.491 00:24:09.491 --- 10.0.0.1 ping statistics --- 00:24:09.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.491 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2055215 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2055215 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2055215 ']' 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.491 11:50:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:09.491 [2024-07-15 11:50:37.430292] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:09.491 [2024-07-15 11:50:37.430345] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.491 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.491 [2024-07-15 11:50:37.508225] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.491 [2024-07-15 11:50:37.583091] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
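Condensed from the trace above: the harness moves the target-side E810 port (cvl_0_0) into a private network namespace with 10.0.0.2, leaves the initiator port (cvl_0_1, 10.0.0.1) in the default namespace, and verifies reachability in both directions before starting any NVMe traffic. A minimal standalone sketch of the same wiring follows; the interface names and addresses are taken from this run, and root privileges plus iproute2/iptables are assumed.

    # Sketch: isolate the target port in its own netns (names/addresses as in this run).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP replies
    ping -c 1 10.0.0.2                                   # initiator -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator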
00:24:09.491 [2024-07-15 11:50:37.583133] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:09.491 [2024-07-15 11:50:37.583142] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:09.491 [2024-07-15 11:50:37.583150] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:09.491 [2024-07-15 11:50:37.583173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:09.491 [2024-07-15 11:50:37.583228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:09.491 [2024-07-15 11:50:37.583337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:09.491 [2024-07-15 11:50:37.583425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:09.491 [2024-07-15 11:50:37.583426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 [2024-07-15 11:50:38.229404] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 Malloc0
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 [2024-07-15 11:50:38.324285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:10.429 [
00:24:10.429   {
00:24:10.429     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:10.429     "subtype": "Discovery",
00:24:10.429     "listen_addresses": [
00:24:10.429       {
00:24:10.429         "trtype": "TCP",
00:24:10.429         "adrfam": "IPv4",
00:24:10.429         "traddr": "10.0.0.2",
00:24:10.429         "trsvcid": "4420"
00:24:10.429       }
00:24:10.429     ],
00:24:10.429     "allow_any_host": true,
00:24:10.429     "hosts": []
00:24:10.429   },
00:24:10.429   {
00:24:10.429     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:10.429     "subtype": "NVMe",
00:24:10.429     "listen_addresses": [
00:24:10.429       {
00:24:10.429         "trtype": "TCP",
00:24:10.429         "adrfam": "IPv4",
00:24:10.429         "traddr": "10.0.0.2",
00:24:10.429         "trsvcid": "4420"
00:24:10.429       }
00:24:10.429     ],
00:24:10.429     "allow_any_host": true,
00:24:10.429     "hosts": [],
00:24:10.429     "serial_number": "SPDK00000000000001",
00:24:10.429     "model_number": "SPDK bdev Controller",
00:24:10.429     "max_namespaces": 32,
00:24:10.429     "min_cntlid": 1,
00:24:10.429     "max_cntlid": 65519,
00:24:10.429     "namespaces": [
00:24:10.429       {
00:24:10.429         "nsid": 1,
00:24:10.429         "bdev_name": "Malloc0",
00:24:10.429         "name": "Malloc0",
00:24:10.429         "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:24:10.429         "eui64": "ABCDEF0123456789",
00:24:10.429         "uuid": "fb2cec54-cbdc-4cea-8088-34d13b2a67cf"
00:24:10.429       }
00:24:10.429     ]
00:24:10.429   }
00:24:10.429 ]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:10.429 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:24:10.429 [2024-07-15 11:50:38.382757] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
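The rpc_cmd calls traced above talk to the nvmf_tgt started earlier over /var/tmp/spdk.sock; in this harness rpc_cmd is essentially a wrapper around SPDK's scripts/rpc.py client. The same target configuration can be rebuilt by hand with the sketch below (an assumption-laden sketch: a running nvmf_tgt and the SPDK checkout path from this workspace; the flags are copied verbatim from the calls above).

    # Reproduce the traced target configuration against a running nvmf_tgt.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # SPDK RPC client in this checkout
    $RPC nvmf_create_transport -t tcp -o -u 8192       # same flags the harness passed
    $RPC bdev_malloc_create 64 512 -b Malloc0          # 64 MiB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                           # should print the JSON shown above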
00:24:10.429 [2024-07-15 11:50:38.382799] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055410 ] 00:24:10.429 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.429 [2024-07-15 11:50:38.414222] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:10.429 [2024-07-15 11:50:38.414289] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:10.429 [2024-07-15 11:50:38.414296] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:10.429 [2024-07-15 11:50:38.414311] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:10.429 [2024-07-15 11:50:38.414318] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:10.430 [2024-07-15 11:50:38.414761] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:10.430 [2024-07-15 11:50:38.414792] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fcaf00 0 00:24:10.430 [2024-07-15 11:50:38.428841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:10.430 [2024-07-15 11:50:38.428852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:10.430 [2024-07-15 11:50:38.428858] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:10.430 [2024-07-15 11:50:38.428862] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:10.430 [2024-07-15 11:50:38.428901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.428908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.428913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.428926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:10.430 [2024-07-15 11:50:38.428942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.436843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.436852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.436857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.436863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.436877] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:10.430 [2024-07-15 11:50:38.436884] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:10.430 [2024-07-15 11:50:38.436891] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:10.430 [2024-07-15 11:50:38.436906] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.436911] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.436916] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.436923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.436937] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.437181] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.437188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.437193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437198] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.437205] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:10.430 [2024-07-15 11:50:38.437214] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:10.430 [2024-07-15 11:50:38.437222] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437232] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.437239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.437251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.437339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.437346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.437350] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.437362] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:10.430 [2024-07-15 11:50:38.437371] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.437378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437383] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437388] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.437395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.437406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.437513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 
[2024-07-15 11:50:38.437520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.437525] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437529] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.437535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.437546] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437551] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.437563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.437575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.437714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.437721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.437726] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.437736] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:10.430 [2024-07-15 11:50:38.437742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.437752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.437859] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:10.430 [2024-07-15 11:50:38.437865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.437875] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.437885] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.437892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.437904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.437999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.438006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.438010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438015] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.438021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:10.430 [2024-07-15 11:50:38.438031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438036] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438041] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.438061] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.438148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.438155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.438159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.438170] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:10.430 [2024-07-15 11:50:38.438176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438185] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:10.430 [2024-07-15 11:50:38.438195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.438230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.438358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.430 [2024-07-15 11:50:38.438366] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.430 [2024-07-15 11:50:38.438370] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438375] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fcaf00): datao=0, datal=4096, cccid=0 00:24:10.430 [2024-07-15 11:50:38.438381] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2035e40) on tqpair(0x1fcaf00): expected_datao=0, payload_size=4096 00:24:10.430 [2024-07-15 11:50:38.438387] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438396] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438401] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.438447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.438451] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.438465] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:10.430 [2024-07-15 11:50:38.438474] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:10.430 [2024-07-15 11:50:38.438480] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:10.430 [2024-07-15 11:50:38.438487] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:10.430 [2024-07-15 11:50:38.438493] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:10.430 [2024-07-15 11:50:38.438499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.430 [2024-07-15 11:50:38.438550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.438665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.438672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.438677] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.438690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.430 [2024-07-15 11:50:38.438713] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438723] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.430 [2024-07-15 11:50:38.438736] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438741] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438745] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.430 [2024-07-15 11:50:38.438759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.430 [2024-07-15 11:50:38.438780] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:10.430 [2024-07-15 11:50:38.438801] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.438805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.438812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.438825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035e40, cid 0, qid 0 00:24:10.430 [2024-07-15 11:50:38.438836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2035fc0, cid 1, qid 0 00:24:10.430 [2024-07-15 11:50:38.438842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2036140, cid 2, qid 0 00:24:10.430 [2024-07-15 11:50:38.438849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.430 [2024-07-15 11:50:38.438855] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2036440, cid 4, qid 0 00:24:10.430 [2024-07-15 11:50:38.439000] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.439007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.439012] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.439016] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036440) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.439022] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:10.430 [2024-07-15 11:50:38.439029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:10.430 [2024-07-15 11:50:38.439040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.439045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.439052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.439064] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2036440, cid 4, qid 0 00:24:10.430 [2024-07-15 11:50:38.439164] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.430 [2024-07-15 11:50:38.439171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.430 [2024-07-15 11:50:38.439175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.439180] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fcaf00): datao=0, datal=4096, cccid=4 00:24:10.430 [2024-07-15 11:50:38.439186] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2036440) on tqpair(0x1fcaf00): expected_datao=0, payload_size=4096 00:24:10.430 [2024-07-15 11:50:38.439192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.439284] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.439289] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.484840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.430 [2024-07-15 11:50:38.484852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.430 [2024-07-15 11:50:38.484856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.484861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036440) on tqpair=0x1fcaf00 00:24:10.430 [2024-07-15 11:50:38.484876] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:10.430 [2024-07-15 11:50:38.484902] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.484907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.484916] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.430 [2024-07-15 11:50:38.484924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.484929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.484933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fcaf00) 00:24:10.430 [2024-07-15 11:50:38.484940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.430 [2024-07-15 11:50:38.484957] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x2036440, cid 4, qid 0 00:24:10.430 [2024-07-15 11:50:38.484964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20365c0, cid 5, qid 0 00:24:10.430 [2024-07-15 11:50:38.485163] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.430 [2024-07-15 11:50:38.485173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.430 [2024-07-15 11:50:38.485177] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.485182] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fcaf00): datao=0, datal=1024, cccid=4 00:24:10.430 [2024-07-15 11:50:38.485188] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2036440) on tqpair(0x1fcaf00): expected_datao=0, payload_size=1024 00:24:10.430 [2024-07-15 11:50:38.485194] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.430 [2024-07-15 11:50:38.485201] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.485206] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.485212] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.431 [2024-07-15 11:50:38.485218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.431 [2024-07-15 11:50:38.485222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.485227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20365c0) on tqpair=0x1fcaf00 00:24:10.431 [2024-07-15 11:50:38.527004] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.431 [2024-07-15 11:50:38.527020] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.431 [2024-07-15 11:50:38.527025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.527030] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036440) on tqpair=0x1fcaf00 00:24:10.431 [2024-07-15 11:50:38.527051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.527057] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fcaf00) 00:24:10.431 [2024-07-15 11:50:38.527066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.431 [2024-07-15 11:50:38.527087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2036440, cid 4, qid 0 00:24:10.431 [2024-07-15 11:50:38.527222] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.431 [2024-07-15 11:50:38.527229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.431 [2024-07-15 11:50:38.527234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.527239] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fcaf00): datao=0, datal=3072, cccid=4 00:24:10.431 [2024-07-15 11:50:38.527245] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2036440) on tqpair(0x1fcaf00): expected_datao=0, payload_size=3072 00:24:10.431 [2024-07-15 11:50:38.527251] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.527258] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.431 [2024-07-15 11:50:38.527262] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527376] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:10.431 [2024-07-15 11:50:38.527382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:10.431 [2024-07-15 11:50:38.527386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036440) on tqpair=0x1fcaf00
00:24:10.431 [2024-07-15 11:50:38.527400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fcaf00)
00:24:10.431 [2024-07-15 11:50:38.527412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:10.431 [2024-07-15 11:50:38.527428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2036440, cid 4, qid 0
00:24:10.431 [2024-07-15 11:50:38.527528] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:24:10.431 [2024-07-15 11:50:38.527535] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:24:10.431 [2024-07-15 11:50:38.527542] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527546] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fcaf00): datao=0, datal=8, cccid=4
00:24:10.431 [2024-07-15 11:50:38.527552] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2036440) on tqpair(0x1fcaf00): expected_datao=0, payload_size=8
00:24:10.431 [2024-07-15 11:50:38.527558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527565] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:24:10.431 [2024-07-15 11:50:38.527569] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:24:10.694 [2024-07-15 11:50:38.572843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:10.694 [2024-07-15 11:50:38.572852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:10.694 [2024-07-15 11:50:38.572859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:10.694 [2024-07-15 11:50:38.572864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036440) on tqpair=0x1fcaf00
00:24:10.694 =====================================================
00:24:10.694 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:24:10.694 =====================================================
00:24:10.694 Controller Capabilities/Features
00:24:10.694 ================================
00:24:10.694 Vendor ID: 0000
00:24:10.694 Subsystem Vendor ID: 0000
00:24:10.694 Serial Number: ....................
00:24:10.694 Model Number: ........................................
00:24:10.694 Firmware Version: 24.09
00:24:10.694 Recommended Arb Burst: 0
00:24:10.694 IEEE OUI Identifier: 00 00 00
00:24:10.694 Multi-path I/O
00:24:10.694 May have multiple subsystem ports: No
00:24:10.694 May have multiple controllers: No
00:24:10.694 Associated with SR-IOV VF: No
00:24:10.694 Max Data Transfer Size: 131072
00:24:10.694 Max Number of Namespaces: 0
00:24:10.694 Max Number of I/O Queues: 1024
00:24:10.694 NVMe Specification Version (VS): 1.3
00:24:10.694 NVMe Specification Version (Identify): 1.3
00:24:10.694 Maximum Queue Entries: 128
00:24:10.694 Contiguous Queues Required: Yes
00:24:10.694 Arbitration Mechanisms Supported
00:24:10.694 Weighted Round Robin: Not Supported
00:24:10.694 Vendor Specific: Not Supported
00:24:10.694 Reset Timeout: 15000 ms
00:24:10.694 Doorbell Stride: 4 bytes
00:24:10.694 NVM Subsystem Reset: Not Supported
00:24:10.694 Command Sets Supported
00:24:10.694 NVM Command Set: Supported
00:24:10.694 Boot Partition: Not Supported
00:24:10.694 Memory Page Size Minimum: 4096 bytes
00:24:10.694 Memory Page Size Maximum: 4096 bytes
00:24:10.694 Persistent Memory Region: Not Supported
00:24:10.694 Optional Asynchronous Events Supported
00:24:10.694 Namespace Attribute Notices: Not Supported
00:24:10.694 Firmware Activation Notices: Not Supported
00:24:10.694 ANA Change Notices: Not Supported
00:24:10.694 PLE Aggregate Log Change Notices: Not Supported
00:24:10.694 LBA Status Info Alert Notices: Not Supported
00:24:10.694 EGE Aggregate Log Change Notices: Not Supported
00:24:10.694 Normal NVM Subsystem Shutdown event: Not Supported
00:24:10.694 Zone Descriptor Change Notices: Not Supported
00:24:10.694 Discovery Log Change Notices: Supported
00:24:10.694 Controller Attributes
00:24:10.694 128-bit Host Identifier: Not Supported
00:24:10.694 Non-Operational Permissive Mode: Not Supported
00:24:10.694 NVM Sets: Not Supported
00:24:10.694 Read Recovery Levels: Not Supported
00:24:10.694 Endurance Groups: Not Supported
00:24:10.694 Predictable Latency Mode: Not Supported
00:24:10.694 Traffic Based Keep ALive: Not Supported
00:24:10.694 Namespace Granularity: Not Supported
00:24:10.694 SQ Associations: Not Supported
00:24:10.694 UUID List: Not Supported
00:24:10.694 Multi-Domain Subsystem: Not Supported
00:24:10.694 Fixed Capacity Management: Not Supported
00:24:10.694 Variable Capacity Management: Not Supported
00:24:10.694 Delete Endurance Group: Not Supported
00:24:10.694 Delete NVM Set: Not Supported
00:24:10.694 Extended LBA Formats Supported: Not Supported
00:24:10.694 Flexible Data Placement Supported: Not Supported
00:24:10.694
00:24:10.694 Controller Memory Buffer Support
00:24:10.694 ================================
00:24:10.694 Supported: No
00:24:10.694
00:24:10.694 Persistent Memory Region Support
00:24:10.694 ================================
00:24:10.694 Supported: No
00:24:10.694
00:24:10.694 Admin Command Set Attributes
00:24:10.694 ============================
00:24:10.694 Security Send/Receive: Not Supported
00:24:10.694 Format NVM: Not Supported
00:24:10.694 Firmware Activate/Download: Not Supported
00:24:10.694 Namespace Management: Not Supported
00:24:10.694 Device Self-Test: Not Supported
00:24:10.694 Directives: Not Supported
00:24:10.694 NVMe-MI: Not Supported
00:24:10.694 Virtualization Management: Not Supported
00:24:10.694 Doorbell Buffer Config: Not Supported
00:24:10.694 Get LBA Status Capability: Not Supported
00:24:10.694 Command & Feature Lockdown Capability: Not Supported
00:24:10.694 Abort Command Limit: 1
00:24:10.694 Async Event Request Limit: 4
00:24:10.694 Number of Firmware Slots: N/A
00:24:10.694 Firmware Slot 1 Read-Only: N/A
00:24:10.694 Firmware Activation Without Reset: N/A
00:24:10.694 Multiple Update Detection Support: N/A
00:24:10.694 Firmware Update Granularity: No Information Provided
00:24:10.694 Per-Namespace SMART Log: No
00:24:10.694 Asymmetric Namespace Access Log Page: Not Supported
00:24:10.694 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:10.694 Command Effects Log Page: Not Supported
00:24:10.694 Get Log Page Extended Data: Supported
00:24:10.694 Telemetry Log Pages: Not Supported
00:24:10.694 Persistent Event Log Pages: Not Supported
00:24:10.694 Supported Log Pages Log Page: May Support
00:24:10.694 Commands Supported & Effects Log Page: Not Supported
00:24:10.694 Feature Identifiers & Effects Log Page:May Support
00:24:10.694 NVMe-MI Commands & Effects Log Page: May Support
00:24:10.694 Data Area 4 for Telemetry Log: Not Supported
00:24:10.694 Error Log Page Entries Supported: 128
00:24:10.694 Keep Alive: Not Supported
00:24:10.694
00:24:10.694 NVM Command Set Attributes
00:24:10.694 ==========================
00:24:10.694 Submission Queue Entry Size
00:24:10.694 Max: 1
00:24:10.694 Min: 1
00:24:10.694 Completion Queue Entry Size
00:24:10.694 Max: 1
00:24:10.694 Min: 1
00:24:10.694 Number of Namespaces: 0
00:24:10.694 Compare Command: Not Supported
00:24:10.694 Write Uncorrectable Command: Not Supported
00:24:10.694 Dataset Management Command: Not Supported
00:24:10.694 Write Zeroes Command: Not Supported
00:24:10.694 Set Features Save Field: Not Supported
00:24:10.694 Reservations: Not Supported
00:24:10.694 Timestamp: Not Supported
00:24:10.694 Copy: Not Supported
00:24:10.694 Volatile Write Cache: Not Present
00:24:10.694 Atomic Write Unit (Normal): 1
00:24:10.694 Atomic Write Unit (PFail): 1
00:24:10.694 Atomic Compare & Write Unit: 1
00:24:10.694 Fused Compare & Write: Supported
00:24:10.694 Scatter-Gather List
00:24:10.694 SGL Command Set: Supported
00:24:10.694 SGL Keyed: Supported
00:24:10.694 SGL Bit Bucket Descriptor: Not Supported
00:24:10.694 SGL Metadata Pointer: Not Supported
00:24:10.694 Oversized SGL: Not Supported
00:24:10.694 SGL Metadata Address: Not Supported
00:24:10.694 SGL Offset: Supported
00:24:10.694 Transport SGL Data Block: Not Supported
00:24:10.694 Replay Protected Memory Block: Not Supported
00:24:10.694
00:24:10.694 Firmware Slot Information
00:24:10.694 =========================
00:24:10.694 Active slot: 0
00:24:10.694
00:24:10.694
00:24:10.694 Error Log
00:24:10.694 =========
00:24:10.694
00:24:10.694 Active Namespaces
00:24:10.695 =================
00:24:10.695 Discovery Log Page
00:24:10.695 ==================
00:24:10.695 Generation Counter: 2
00:24:10.695 Number of Records: 2
00:24:10.695 Record Format: 0
00:24:10.695
00:24:10.695 Discovery Log Entry 0
00:24:10.695 ----------------------
00:24:10.695 Transport Type: 3 (TCP)
00:24:10.695 Address Family: 1 (IPv4)
00:24:10.695 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:10.695 Entry Flags:
00:24:10.695 Duplicate Returned Information: 1
00:24:10.695 Explicit Persistent Connection Support for Discovery: 1
00:24:10.695 Transport Requirements:
00:24:10.695 Secure Channel: Not Required
00:24:10.695 Port ID: 0 (0x0000)
00:24:10.695 Controller ID: 65535 (0xffff)
00:24:10.695 Admin Max SQ Size: 128
00:24:10.695 Transport Service Identifier: 4420
00:24:10.695 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:10.695 Transport Address: 10.0.0.2
00:24:10.695
Discovery Log Entry 1 00:24:10.695 ---------------------- 00:24:10.695 Transport Type: 3 (TCP) 00:24:10.695 Address Family: 1 (IPv4) 00:24:10.695 Subsystem Type: 2 (NVM Subsystem) 00:24:10.695 Entry Flags: 00:24:10.695 Duplicate Returned Information: 0 00:24:10.695 Explicit Persistent Connection Support for Discovery: 0 00:24:10.695 Transport Requirements: 00:24:10.695 Secure Channel: Not Required 00:24:10.695 Port ID: 0 (0x0000) 00:24:10.695 Controller ID: 65535 (0xffff) 00:24:10.695 Admin Max SQ Size: 128 00:24:10.695 Transport Service Identifier: 4420 00:24:10.695 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:10.695 Transport Address: 10.0.0.2 [2024-07-15 11:50:38.572952] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:10.695 [2024-07-15 11:50:38.572965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035e40) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.572973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.695 [2024-07-15 11:50:38.572980] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2035fc0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.572985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.695 [2024-07-15 11:50:38.572992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2036140) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.572997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.695 [2024-07-15 11:50:38.573003] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.695 [2024-07-15 11:50:38.573020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573025] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573030] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573039] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573148] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573164] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573181] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 
11:50:38.573188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573321] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573326] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:10.695 [2024-07-15 11:50:38.573332] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:10.695 [2024-07-15 11:50:38.573343] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573353] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573371] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573463] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573470] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573474] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573611] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573631] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573636] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573640] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573747] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573778] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573782] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.573896] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.573904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.573908] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.573923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.573933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.573940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.573952] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.574040] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.574047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.574051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.574066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574075] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.574082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.574093] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.574179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.695 [2024-07-15 11:50:38.574186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.695 [2024-07-15 11:50:38.574190] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574195] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.695 [2024-07-15 11:50:38.574204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.695 [2024-07-15 11:50:38.574213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.695 [2024-07-15 11:50:38.574220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.695 [2024-07-15 11:50:38.574231] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.695 [2024-07-15 11:50:38.574323] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.574330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.574334] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574339] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.574350] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574354] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.574366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.574377] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.574465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.574473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.574477] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.574492] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574502] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.574509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.574520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 
[2024-07-15 11:50:38.574608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.574615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.574619] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.574634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574643] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.574650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.574661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.574749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.574756] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.574760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.574774] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.574790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.574802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.574895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.574902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.574906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.574922] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.574931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.574938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.574950] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575044] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
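The long run of FABRIC PROPERTY GET completions here is the host polling the discovery controller while it shuts down: nvme_ctrlr_shutdown_set_cc_done above reported RTD3E = 0, so the driver falls back to a 10000 ms shutdown timeout and keeps re-reading the status register, consistent with polling CSTS.SHST. A minimal sketch of that handshake in C, assuming hypothetical prop_get()/prop_set() helpers standing in for the Fabrics Property Get/Set commands (this is not SPDK's internal implementation):

#include <stdbool.h>
#include <stdint.h>

#define NVME_REG_CC   0x14  /* Controller Configuration */
#define NVME_REG_CSTS 0x1c  /* Controller Status */

/* Hypothetical transports for the Fabrics Property Get/Set commands. */
extern bool prop_get(void *ctrlr, uint32_t ofst, uint32_t *val);
extern bool prop_set(void *ctrlr, uint32_t ofst, uint32_t val);
extern uint64_t now_ms(void);

/* Request a normal shutdown (CC.SHN = 01b) and poll CSTS.SHST until the
 * controller reports shutdown processing complete (SHST = 10b) or the
 * timeout expires. */
static bool shutdown_ctrlr(void *ctrlr, uint64_t timeout_ms)
{
    uint32_t cc, csts;
    uint64_t deadline;

    if (!prop_get(ctrlr, NVME_REG_CC, &cc)) {
        return false;
    }
    cc = (cc & ~(3u << 14)) | (1u << 14);       /* CC.SHN = 01b */
    if (!prop_set(ctrlr, NVME_REG_CC, cc)) {
        return false;
    }

    deadline = now_ms() + timeout_ms;
    do {
        if (!prop_get(ctrlr, NVME_REG_CSTS, &csts)) {
            return false;
        }
        if (((csts >> 2) & 3u) == 2u) {         /* CSTS.SHST = 10b */
            return true;
        }
    } while (now_ms() < deadline);

    return false;  /* shutdown timed out */
}

The log later reports "shutdown complete in 7 milliseconds", i.e. the SHST check succeeds long before the 10000 ms deadline.
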
00:24:10.696 [2024-07-15 11:50:38.575052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575067] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575077] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575183] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575190] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575199] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575209] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575219] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575338] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575352] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575357] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575375] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575571] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575732] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575848] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575857] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.575867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575872] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.575877] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.575884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.575895] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.575986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.575993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.575997] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.576012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576017] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576021] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.576028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.576040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.696 [2024-07-15 11:50:38.576202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.696 [2024-07-15 11:50:38.576208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.696 [2024-07-15 11:50:38.576213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.696 [2024-07-15 11:50:38.576228] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576232] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.696 [2024-07-15 11:50:38.576237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.696 [2024-07-15 11:50:38.576244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.696 [2024-07-15 11:50:38.576255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.697 [2024-07-15 11:50:38.576344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.576350] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.576355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.697 [2024-07-15 11:50:38.576372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576377] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.697 [2024-07-15 11:50:38.576388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.576400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.697 [2024-07-15 11:50:38.576487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.576494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.576498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576503] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.697 [2024-07-15 11:50:38.576513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576523] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.697 
[2024-07-15 11:50:38.576530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.576541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.697 [2024-07-15 11:50:38.576700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.576706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.576711] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576715] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.697 [2024-07-15 11:50:38.576726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576731] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.576735] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.697 [2024-07-15 11:50:38.576742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.576753] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.697 [2024-07-15 11:50:38.580844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.580853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.580858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.580863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.697 [2024-07-15 11:50:38.580874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.580879] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.580884] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fcaf00) 00:24:10.697 [2024-07-15 11:50:38.580891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.580904] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20362c0, cid 3, qid 0 00:24:10.697 [2024-07-15 11:50:38.581083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.581090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.581094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.581099] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20362c0) on tqpair=0x1fcaf00 00:24:10.697 [2024-07-15 11:50:38.581108] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:10.697 00:24:10.697 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:10.697 [2024-07-15 11:50:38.624267] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
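The identify utility receives the whole connection in one transport-ID string via -r. The same string can be fed to SPDK's public host API; a minimal sketch, under the assumption that the target from the log (10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) is reachable, with error handling abbreviated:

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";          /* arbitrary app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same parameters the test passes to spdk_nvme_identify -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Synchronous attach; internally this runs the admin-queue bring-up
     * traced below (icreq, FABRIC CONNECT, read vs/cap, CC.EN, ...). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    printf("connected to %s\n", trid.subnqn);
    spdk_nvme_detach(ctrlr);
    return 0;
}
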
00:24:10.697 [2024-07-15 11:50:38.624306] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2055412 ] 00:24:10.697 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.697 [2024-07-15 11:50:38.655908] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:10.697 [2024-07-15 11:50:38.655951] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:10.697 [2024-07-15 11:50:38.655956] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:10.697 [2024-07-15 11:50:38.655969] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:10.697 [2024-07-15 11:50:38.655976] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:10.697 [2024-07-15 11:50:38.656346] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:10.697 [2024-07-15 11:50:38.656370] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1548f00 0 00:24:10.697 [2024-07-15 11:50:38.670844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:10.697 [2024-07-15 11:50:38.670859] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:10.697 [2024-07-15 11:50:38.670864] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:10.697 [2024-07-15 11:50:38.670868] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:10.697 [2024-07-15 11:50:38.670902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.670908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.670913] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.697 [2024-07-15 11:50:38.670925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:10.697 [2024-07-15 11:50:38.670942] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.697 [2024-07-15 11:50:38.678842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.678851] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.678856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.678861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.697 [2024-07-15 11:50:38.678873] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:10.697 [2024-07-15 11:50:38.678881] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:10.697 [2024-07-15 11:50:38.678887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:10.697 [2024-07-15 11:50:38.678900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.678905] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
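The "setting state to read vs / read cap" records show that on fabrics the host reaches the controller registers through admin-queue Property Get commands rather than MMIO, and the driver caches the results. Once a controller is attached, those cached register images can be read back through the public accessors; a short sketch, assuming ctrlr came from a successful spdk_nvme_connect() as above:

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void dump_regs(struct spdk_nvme_ctrlr *ctrlr)
{
    /* Each accessor returns the register image the init state machine
     * fetched with FABRIC PROPERTY GET during bring-up. */
    union spdk_nvme_vs_register   vs   = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
    union spdk_nvme_cap_register  cap  = spdk_nvme_ctrlr_get_regs_cap(ctrlr);
    union spdk_nvme_csts_register csts = spdk_nvme_ctrlr_get_regs_csts(ctrlr);

    printf("NVMe version: %u.%u\n", vs.bits.mjr, vs.bits.mnr);
    printf("CAP.MQES (0-based max queue entries): %u\n", cap.bits.mqes);
    printf("CSTS.RDY: %u\n", csts.bits.rdy);
}
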
00:24:10.697 [2024-07-15 11:50:38.678909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.697 [2024-07-15 11:50:38.678918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.678934] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.697 [2024-07-15 11:50:38.679124] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.679131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.679136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.697 [2024-07-15 11:50:38.679146] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:10.697 [2024-07-15 11:50:38.679156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:10.697 [2024-07-15 11:50:38.679164] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679168] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.697 [2024-07-15 11:50:38.679180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.679193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.697 [2024-07-15 11:50:38.679363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.679369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.679374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.697 [2024-07-15 11:50:38.679384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:10.697 [2024-07-15 11:50:38.679394] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:10.697 [2024-07-15 11:50:38.679402] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.697 [2024-07-15 11:50:38.679418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.679429] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.697 [2024-07-15 11:50:38.679591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.679597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:10.697 [2024-07-15 11:50:38.679602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.697 [2024-07-15 11:50:38.679612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:10.697 [2024-07-15 11:50:38.679624] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679628] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.697 [2024-07-15 11:50:38.679633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.697 [2024-07-15 11:50:38.679640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.697 [2024-07-15 11:50:38.679651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.697 [2024-07-15 11:50:38.679794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.697 [2024-07-15 11:50:38.679803] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.697 [2024-07-15 11:50:38.679808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.679813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.698 [2024-07-15 11:50:38.679818] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:10.698 [2024-07-15 11:50:38.679824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:10.698 [2024-07-15 11:50:38.679838] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:10.698 [2024-07-15 11:50:38.679946] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:10.698 [2024-07-15 11:50:38.679951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:10.698 [2024-07-15 11:50:38.679960] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.679964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.679969] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.679976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.698 [2024-07-15 11:50:38.679988] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.698 [2024-07-15 11:50:38.680081] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.698 [2024-07-15 11:50:38.680088] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.698 [2024-07-15 11:50:38.680093] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on 
tqpair=0x1548f00 00:24:10.698 [2024-07-15 11:50:38.680103] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:10.698 [2024-07-15 11:50:38.680114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.698 [2024-07-15 11:50:38.680141] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.698 [2024-07-15 11:50:38.680230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.698 [2024-07-15 11:50:38.680236] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.698 [2024-07-15 11:50:38.680241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.698 [2024-07-15 11:50:38.680251] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:10.698 [2024-07-15 11:50:38.680257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680266] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:10.698 [2024-07-15 11:50:38.680276] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680293] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.698 [2024-07-15 11:50:38.680312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.698 [2024-07-15 11:50:38.680521] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.698 [2024-07-15 11:50:38.680528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.698 [2024-07-15 11:50:38.680532] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680537] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=4096, cccid=0 00:24:10.698 [2024-07-15 11:50:38.680543] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b3e40) on tqpair(0x1548f00): expected_datao=0, payload_size=4096 00:24:10.698 [2024-07-15 11:50:38.680548] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680556] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680561] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.698 [2024-07-15 11:50:38.680663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.698 [2024-07-15 11:50:38.680667] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.698 [2024-07-15 11:50:38.680679] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:10.698 [2024-07-15 11:50:38.680688] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:10.698 [2024-07-15 11:50:38.680694] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:10.698 [2024-07-15 11:50:38.680699] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:10.698 [2024-07-15 11:50:38.680705] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:10.698 [2024-07-15 11:50:38.680711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680733] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680738] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.698 [2024-07-15 11:50:38.680758] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.698 [2024-07-15 11:50:38.680859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.698 [2024-07-15 11:50:38.680866] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.698 [2024-07-15 11:50:38.680870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680875] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.698 [2024-07-15 11:50:38.680882] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.698 [2024-07-15 11:50:38.680907] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680912] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680916] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.698 [2024-07-15 11:50:38.680929] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.698 [2024-07-15 11:50:38.680951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.680967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.698 [2024-07-15 11:50:38.680973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:10.698 [2024-07-15 11:50:38.680992] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.698 [2024-07-15 11:50:38.680997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.698 [2024-07-15 11:50:38.681004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.698 [2024-07-15 11:50:38.681017] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3e40, cid 0, qid 0 00:24:10.698 [2024-07-15 11:50:38.681023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b3fc0, cid 1, qid 0 00:24:10.698 [2024-07-15 11:50:38.681029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4140, cid 2, qid 0 00:24:10.698 [2024-07-15 11:50:38.681034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.698 [2024-07-15 11:50:38.681040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.698 [2024-07-15 11:50:38.681159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.698 [2024-07-15 11:50:38.681166] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.681171] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.681181] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:10.699 [2024-07-15 11:50:38.681187] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681197] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681212] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681223] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.681230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:10.699 [2024-07-15 11:50:38.681241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.699 [2024-07-15 11:50:38.681332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.681339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.681343] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.681402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681413] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681421] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.681433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.699 [2024-07-15 11:50:38.681444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.699 [2024-07-15 11:50:38.681570] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.699 [2024-07-15 11:50:38.681578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.699 [2024-07-15 11:50:38.681582] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681587] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=4096, cccid=4 00:24:10.699 [2024-07-15 11:50:38.681593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b4440) on tqpair(0x1548f00): expected_datao=0, payload_size=4096 00:24:10.699 [2024-07-15 11:50:38.681598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681606] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:10.699 [2024-07-15 11:50:38.681713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.681717] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.681731] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:10.699 [2024-07-15 11:50:38.681746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.681764] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681769] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.681776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.699 [2024-07-15 11:50:38.681788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.699 [2024-07-15 11:50:38.681978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.699 [2024-07-15 11:50:38.681987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.699 [2024-07-15 11:50:38.681991] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.681996] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=4096, cccid=4 00:24:10.699 [2024-07-15 11:50:38.682002] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b4440) on tqpair(0x1548f00): expected_datao=0, payload_size=4096 00:24:10.699 [2024-07-15 11:50:38.682007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682014] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682019] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.682121] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.682125] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.682142] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682161] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.682173] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.699 [2024-07-15 11:50:38.682185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.699 [2024-07-15 11:50:38.682282] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.699 [2024-07-15 11:50:38.682289] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.699 [2024-07-15 11:50:38.682293] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682298] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=4096, cccid=4 00:24:10.699 [2024-07-15 11:50:38.682303] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b4440) on tqpair(0x1548f00): expected_datao=0, payload_size=4096 00:24:10.699 [2024-07-15 11:50:38.682309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682407] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682412] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.682554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.682558] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.682571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682592] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682620] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:10.699 [2024-07-15 11:50:38.682626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:10.699 [2024-07-15 11:50:38.682632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:10.699 [2024-07-15 11:50:38.682647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.682659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.699 [2024-07-15 11:50:38.682667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682672] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.682676] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.682683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:10.699 [2024-07-15 11:50:38.682697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 00:24:10.699 [2024-07-15 11:50:38.682703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b45c0, cid 5, qid 0 00:24:10.699 [2024-07-15 11:50:38.682818] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.682825] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.682829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.686840] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.686848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.686854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.686859] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.686863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b45c0) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.686875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.686880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548f00) 00:24:10.699 [2024-07-15 11:50:38.686887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.699 [2024-07-15 11:50:38.686900] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b45c0, cid 5, qid 0 00:24:10.699 [2024-07-15 11:50:38.687071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.699 [2024-07-15 11:50:38.687078] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.699 [2024-07-15 11:50:38.687082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.687087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b45c0) on tqpair=0x1548f00 00:24:10.699 [2024-07-15 11:50:38.687098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.699 [2024-07-15 11:50:38.687103] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687109] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b45c0, cid 5, qid 0 00:24:10.700 [2024-07-15 11:50:38.687214] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687230] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b45c0) on tqpair=0x1548f00 00:24:10.700 [2024-07-15 11:50:38.687241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687264] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b45c0, cid 5, qid 0 00:24:10.700 [2024-07-15 11:50:38.687354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687365] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687370] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b45c0) on tqpair=0x1548f00 00:24:10.700 [2024-07-15 11:50:38.687385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687390] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687428] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687443] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1548f00) 00:24:10.700 [2024-07-15 11:50:38.687454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.700 [2024-07-15 11:50:38.687467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b45c0, cid 5, qid 0 00:24:10.700 [2024-07-15 11:50:38.687472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4440, cid 4, qid 0 
00:24:10.700 [2024-07-15 11:50:38.687478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b4740, cid 6, qid 0 00:24:10.700 [2024-07-15 11:50:38.687483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b48c0, cid 7, qid 0 00:24:10.700 [2024-07-15 11:50:38.687703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.700 [2024-07-15 11:50:38.687711] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.700 [2024-07-15 11:50:38.687715] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687720] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=8192, cccid=5 00:24:10.700 [2024-07-15 11:50:38.687726] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b45c0) on tqpair(0x1548f00): expected_datao=0, payload_size=8192 00:24:10.700 [2024-07-15 11:50:38.687731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687741] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687746] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687752] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.700 [2024-07-15 11:50:38.687758] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.700 [2024-07-15 11:50:38.687763] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=512, cccid=4 00:24:10.700 [2024-07-15 11:50:38.687773] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b4440) on tqpair(0x1548f00): expected_datao=0, payload_size=512 00:24:10.700 [2024-07-15 11:50:38.687779] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687785] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687790] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687796] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.700 [2024-07-15 11:50:38.687802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.700 [2024-07-15 11:50:38.687806] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687811] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=512, cccid=6 00:24:10.700 [2024-07-15 11:50:38.687817] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b4740) on tqpair(0x1548f00): expected_datao=0, payload_size=512 00:24:10.700 [2024-07-15 11:50:38.687822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687829] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687839] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:10.700 [2024-07-15 11:50:38.687852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:10.700 [2024-07-15 11:50:38.687856] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687861] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1548f00): datao=0, datal=4096, cccid=7 00:24:10.700 [2024-07-15 11:50:38.687866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x15b48c0) on tqpair(0x1548f00): expected_datao=0, payload_size=4096 00:24:10.700 [2024-07-15 11:50:38.687872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687879] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687884] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687903] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b45c0) on tqpair=0x1548f00 00:24:10.700 [2024-07-15 11:50:38.687921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687931] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687936] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4440) on tqpair=0x1548f00 00:24:10.700 [2024-07-15 11:50:38.687947] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687957] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4740) on tqpair=0x1548f00 00:24:10.700 [2024-07-15 11:50:38.687969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.700 [2024-07-15 11:50:38.687977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.700 [2024-07-15 11:50:38.687982] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.700 [2024-07-15 11:50:38.687986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b48c0) on tqpair=0x1548f00 00:24:10.700 ===================================================== 00:24:10.700 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:10.700 ===================================================== 00:24:10.700 Controller Capabilities/Features 00:24:10.700 ================================ 00:24:10.700 Vendor ID: 8086 00:24:10.700 Subsystem Vendor ID: 8086 00:24:10.700 Serial Number: SPDK00000000000001 00:24:10.700 Model Number: SPDK bdev Controller 00:24:10.700 Firmware Version: 24.09 00:24:10.700 Recommended Arb Burst: 6 00:24:10.700 IEEE OUI Identifier: e4 d2 5c 00:24:10.700 Multi-path I/O 00:24:10.700 May have multiple subsystem ports: Yes 00:24:10.700 May have multiple controllers: Yes 00:24:10.700 Associated with SR-IOV VF: No 00:24:10.700 Max Data Transfer Size: 131072 00:24:10.700 Max Number of Namespaces: 32 00:24:10.700 Max Number of I/O Queues: 127 00:24:10.700 NVMe Specification Version (VS): 1.3 00:24:10.700 NVMe Specification Version (Identify): 1.3 00:24:10.700 Maximum Queue Entries: 128 00:24:10.700 Contiguous Queues Required: Yes 00:24:10.700 
Arbitration Mechanisms Supported 00:24:10.700 Weighted Round Robin: Not Supported 00:24:10.700 Vendor Specific: Not Supported 00:24:10.700 Reset Timeout: 15000 ms 00:24:10.700 Doorbell Stride: 4 bytes 00:24:10.700 NVM Subsystem Reset: Not Supported 00:24:10.700 Command Sets Supported 00:24:10.700 NVM Command Set: Supported 00:24:10.700 Boot Partition: Not Supported 00:24:10.700 Memory Page Size Minimum: 4096 bytes 00:24:10.700 Memory Page Size Maximum: 4096 bytes 00:24:10.700 Persistent Memory Region: Not Supported 00:24:10.700 Optional Asynchronous Events Supported 00:24:10.700 Namespace Attribute Notices: Supported 00:24:10.700 Firmware Activation Notices: Not Supported 00:24:10.700 ANA Change Notices: Not Supported 00:24:10.700 PLE Aggregate Log Change Notices: Not Supported 00:24:10.700 LBA Status Info Alert Notices: Not Supported 00:24:10.700 EGE Aggregate Log Change Notices: Not Supported 00:24:10.700 Normal NVM Subsystem Shutdown event: Not Supported 00:24:10.700 Zone Descriptor Change Notices: Not Supported 00:24:10.700 Discovery Log Change Notices: Not Supported 00:24:10.700 Controller Attributes 00:24:10.700 128-bit Host Identifier: Supported 00:24:10.700 Non-Operational Permissive Mode: Not Supported 00:24:10.700 NVM Sets: Not Supported 00:24:10.700 Read Recovery Levels: Not Supported 00:24:10.700 Endurance Groups: Not Supported 00:24:10.700 Predictable Latency Mode: Not Supported 00:24:10.700 Traffic Based Keep Alive: Not Supported 00:24:10.700 Namespace Granularity: Not Supported 00:24:10.700 SQ Associations: Not Supported 00:24:10.700 UUID List: Not Supported 00:24:10.700 Multi-Domain Subsystem: Not Supported 00:24:10.700 Fixed Capacity Management: Not Supported 00:24:10.700 Variable Capacity Management: Not Supported 00:24:10.700 Delete Endurance Group: Not Supported 00:24:10.700 Delete NVM Set: Not Supported 00:24:10.700 Extended LBA Formats Supported: Not Supported 00:24:10.700 Flexible Data Placement Supported: Not Supported 00:24:10.701 00:24:10.701 Controller Memory Buffer Support 00:24:10.701 ================================ 00:24:10.701 Supported: No 00:24:10.701 00:24:10.701 Persistent Memory Region Support 00:24:10.701 ================================ 00:24:10.701 Supported: No 00:24:10.701 00:24:10.701 Admin Command Set Attributes 00:24:10.701 ============================ 00:24:10.701 Security Send/Receive: Not Supported 00:24:10.701 Format NVM: Not Supported 00:24:10.701 Firmware Activate/Download: Not Supported 00:24:10.701 Namespace Management: Not Supported 00:24:10.701 Device Self-Test: Not Supported 00:24:10.701 Directives: Not Supported 00:24:10.701 NVMe-MI: Not Supported 00:24:10.701 Virtualization Management: Not Supported 00:24:10.701 Doorbell Buffer Config: Not Supported 00:24:10.701 Get LBA Status Capability: Not Supported 00:24:10.701 Command & Feature Lockdown Capability: Not Supported 00:24:10.701 Abort Command Limit: 4 00:24:10.701 Async Event Request Limit: 4 00:24:10.701 Number of Firmware Slots: N/A 00:24:10.701 Firmware Slot 1 Read-Only: N/A 00:24:10.701 Firmware Activation Without Reset: N/A 00:24:10.701 Multiple Update Detection Support: N/A 00:24:10.701 Firmware Update Granularity: No Information Provided 00:24:10.701 Per-Namespace SMART Log: No 00:24:10.701 Asymmetric Namespace Access Log Page: Not Supported 00:24:10.701 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:10.701 Command Effects Log Page: Supported 00:24:10.701 Get Log Page Extended Data: Supported 00:24:10.701 Telemetry Log Pages: Not Supported 00:24:10.701 Persistent Event Log 
Pages: Not Supported 00:24:10.701 Supported Log Pages Log Page: May Support 00:24:10.701 Commands Supported & Effects Log Page: Not Supported 00:24:10.701 Feature Identifiers & Effects Log Page: May Support 00:24:10.701 NVMe-MI Commands & Effects Log Page: May Support 00:24:10.701 Data Area 4 for Telemetry Log: Not Supported 00:24:10.701 Error Log Page Entries Supported: 128 00:24:10.701 Keep Alive: Supported 00:24:10.701 Keep Alive Granularity: 10000 ms 00:24:10.701 00:24:10.701 NVM Command Set Attributes 00:24:10.701 ========================== 00:24:10.701 Submission Queue Entry Size 00:24:10.701 Max: 64 00:24:10.701 Min: 64 00:24:10.701 Completion Queue Entry Size 00:24:10.701 Max: 16 00:24:10.701 Min: 16 00:24:10.701 Number of Namespaces: 32 00:24:10.701 Compare Command: Supported 00:24:10.701 Write Uncorrectable Command: Not Supported 00:24:10.701 Dataset Management Command: Supported 00:24:10.701 Write Zeroes Command: Supported 00:24:10.701 Set Features Save Field: Not Supported 00:24:10.701 Reservations: Supported 00:24:10.701 Timestamp: Not Supported 00:24:10.701 Copy: Supported 00:24:10.701 Volatile Write Cache: Present 00:24:10.701 Atomic Write Unit (Normal): 1 00:24:10.701 Atomic Write Unit (PFail): 1 00:24:10.701 Atomic Compare & Write Unit: 1 00:24:10.701 Fused Compare & Write: Supported 00:24:10.701 Scatter-Gather List 00:24:10.701 SGL Command Set: Supported 00:24:10.701 SGL Keyed: Supported 00:24:10.701 SGL Bit Bucket Descriptor: Not Supported 00:24:10.701 SGL Metadata Pointer: Not Supported 00:24:10.701 Oversized SGL: Not Supported 00:24:10.701 SGL Metadata Address: Not Supported 00:24:10.701 SGL Offset: Supported 00:24:10.701 Transport SGL Data Block: Not Supported 00:24:10.701 Replay Protected Memory Block: Not Supported 00:24:10.701 00:24:10.701 Firmware Slot Information 00:24:10.701 ========================= 00:24:10.701 Active slot: 1 00:24:10.701 Slot 1 Firmware Revision: 24.09 00:24:10.701 00:24:10.701 00:24:10.701 Commands Supported and Effects 00:24:10.701 ============================== 00:24:10.701 Admin Commands 00:24:10.701 -------------- 00:24:10.701 Get Log Page (02h): Supported 00:24:10.701 Identify (06h): Supported 00:24:10.701 Abort (08h): Supported 00:24:10.701 Set Features (09h): Supported 00:24:10.701 Get Features (0Ah): Supported 00:24:10.701 Asynchronous Event Request (0Ch): Supported 00:24:10.701 Keep Alive (18h): Supported 00:24:10.701 I/O Commands 00:24:10.701 ------------ 00:24:10.701 Flush (00h): Supported LBA-Change 00:24:10.701 Write (01h): Supported LBA-Change 00:24:10.701 Read (02h): Supported 00:24:10.701 Compare (05h): Supported 00:24:10.701 Write Zeroes (08h): Supported LBA-Change 00:24:10.701 Dataset Management (09h): Supported LBA-Change 00:24:10.701 Copy (19h): Supported LBA-Change 00:24:10.701 00:24:10.701 Error Log 00:24:10.701 ========= 00:24:10.701 00:24:10.701 Arbitration 00:24:10.701 =========== 00:24:10.701 Arbitration Burst: 1 00:24:10.701 00:24:10.701 Power Management 00:24:10.701 ================ 00:24:10.701 Number of Power States: 1 00:24:10.701 Current Power State: Power State #0 00:24:10.701 Power State #0: 00:24:10.701 Max Power: 0.00 W 00:24:10.701 Non-Operational State: Operational 00:24:10.701 Entry Latency: Not Reported 00:24:10.701 Exit Latency: Not Reported 00:24:10.701 Relative Read Throughput: 0 00:24:10.701 Relative Read Latency: 0 00:24:10.701 Relative Write Throughput: 0 00:24:10.701 Relative Write Latency: 0 00:24:10.701 Idle Power: Not Reported 00:24:10.701 Active Power: Not Reported 00:24:10.701 
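The controller dump above (and the health and namespace data that follows) is what the nvmf_identify test (host/identify.sh) prints after connecting to the target. For reference, a minimal sketch of querying the same target by hand with nvme-cli, using the address and NQN from this run; the device node /dev/nvme0 is hypothetical and depends on what the kernel assigns:

    nvme discover -t tcp -a 10.0.0.2 -s 4420                                # should list nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl /dev/nvme0                                                 # raw form of the controller fields shown above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1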
Non-Operational Permissive Mode: Not Supported 00:24:10.701 00:24:10.701 Health Information 00:24:10.701 ================== 00:24:10.701 Critical Warnings: 00:24:10.701 Available Spare Space: OK 00:24:10.701 Temperature: OK 00:24:10.701 Device Reliability: OK 00:24:10.701 Read Only: No 00:24:10.701 Volatile Memory Backup: OK 00:24:10.701 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:10.701 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:10.701 Available Spare: 0% 00:24:10.701 Available Spare Threshold: 0% 00:24:10.701 Life Percentage Used:[2024-07-15 11:50:38.688074] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1548f00) 00:24:10.701 [2024-07-15 11:50:38.688088] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.701 [2024-07-15 11:50:38.688102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b48c0, cid 7, qid 0 00:24:10.701 [2024-07-15 11:50:38.688286] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.701 [2024-07-15 11:50:38.688293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.701 [2024-07-15 11:50:38.688298] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b48c0) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688335] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:10.701 [2024-07-15 11:50:38.688345] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3e40) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.701 [2024-07-15 11:50:38.688359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b3fc0) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.701 [2024-07-15 11:50:38.688371] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b4140) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.701 [2024-07-15 11:50:38.688382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:10.701 [2024-07-15 11:50:38.688396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688405] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.701 [2024-07-15 11:50:38.688412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.701 [2024-07-15 11:50:38.688426] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.701 [2024-07-15 11:50:38.688515] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.701 [2024-07-15 11:50:38.688522] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.701 [2024-07-15 11:50:38.688526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688542] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.701 [2024-07-15 11:50:38.688554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.701 [2024-07-15 11:50:38.688569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.701 [2024-07-15 11:50:38.688667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.701 [2024-07-15 11:50:38.688674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.701 [2024-07-15 11:50:38.688678] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688683] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.701 [2024-07-15 11:50:38.688688] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:10.701 [2024-07-15 11:50:38.688694] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:10.701 [2024-07-15 11:50:38.688705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.701 [2024-07-15 11:50:38.688715] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.688722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.688733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.688823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.688829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.688838] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.688843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.688854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.688859] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.688863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.688871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.688882] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.688970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.688977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.688981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.688986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.688996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689024] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689118] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689127] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689142] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689147] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689154] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689271] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689280] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 
11:50:38.689399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689429] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689433] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689553] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689600] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689693] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 
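The repeated FABRIC PROPERTY SET/GET records around this point are the host side of the controller shutdown handshake: nvme_ctrlr_destruct_async writes CC.SHN over a Fabrics Property Set, then polls CSTS over Property Get until CSTS.SHST reports shutdown complete (logged further below as "shutdown complete in 6 milliseconds"). A rough hand-run equivalent, assuming a connected fabrics controller at a hypothetical /dev/nvme0 and an nvme-cli build with get-property support:

    # CSTS is the controller register at offset 0x1c; SHST is bits 3:2, value 2 == shutdown complete
    nvme get-property /dev/nvme0 -o 0x1c -H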
[2024-07-15 11:50:38.689845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689850] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.689860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.689877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.689888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.689977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.689984] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.689988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.689993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.690002] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.690007] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.690011] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.690018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.690030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.690115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.690122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.702 [2024-07-15 11:50:38.690126] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.690131] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.702 [2024-07-15 11:50:38.690140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.690145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.702 [2024-07-15 11:50:38.690150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.702 [2024-07-15 11:50:38.690157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.702 [2024-07-15 11:50:38.690168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.702 [2024-07-15 11:50:38.690257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.702 [2024-07-15 11:50:38.690263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.703 [2024-07-15 11:50:38.690268] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.703 [2024-07-15 11:50:38.690282] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.703 [2024-07-15 11:50:38.690298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.703 [2024-07-15 11:50:38.690309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.703 [2024-07-15 11:50:38.690397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.703 [2024-07-15 11:50:38.690403] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.703 [2024-07-15 11:50:38.690408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690413] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.703 [2024-07-15 11:50:38.690423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690432] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.703 [2024-07-15 11:50:38.690439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.703 [2024-07-15 11:50:38.690451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.703 [2024-07-15 11:50:38.690541] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.703 [2024-07-15 11:50:38.690548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.703 [2024-07-15 11:50:38.690553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.703 [2024-07-15 11:50:38.690568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690572] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690577] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.703 [2024-07-15 11:50:38.690584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.703 [2024-07-15 11:50:38.690595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.703 [2024-07-15 11:50:38.690685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.703 [2024-07-15 11:50:38.690692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.703 [2024-07-15 11:50:38.690697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.703 [2024-07-15 11:50:38.690712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690716] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.703 [2024-07-15 11:50:38.690721] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.703 [2024-07-15 11:50:38.690728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.703 [2024-07-15 11:50:38.690740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.703 [2024-07-15 11:50:38.690825] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.703 [2024-07-15 11:50:38.694837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.703 [2024-07-15 11:50:38.694844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.704 [2024-07-15 11:50:38.694849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.704 [2024-07-15 11:50:38.694860] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:10.704 [2024-07-15 11:50:38.694865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:10.704 [2024-07-15 11:50:38.694869] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1548f00) 00:24:10.704 [2024-07-15 11:50:38.694877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:10.704 [2024-07-15 11:50:38.694890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x15b42c0, cid 3, qid 0 00:24:10.704 [2024-07-15 11:50:38.695067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:10.704 [2024-07-15 11:50:38.695076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:10.704 [2024-07-15 11:50:38.695081] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:10.704 [2024-07-15 11:50:38.695086] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x15b42c0) on tqpair=0x1548f00 00:24:10.704 [2024-07-15 11:50:38.695095] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:10.704 0% 00:24:10.704 Data Units Read: 0 00:24:10.704 Data Units Written: 0 00:24:10.704 Host Read Commands: 0 00:24:10.704 Host Write Commands: 0 00:24:10.704 Controller Busy Time: 0 minutes 00:24:10.704 Power Cycles: 0 00:24:10.704 Power On Hours: 0 hours 00:24:10.704 Unsafe Shutdowns: 0 00:24:10.704 Unrecoverable Media Errors: 0 00:24:10.704 Lifetime Error Log Entries: 0 00:24:10.704 Warning Temperature Time: 0 minutes 00:24:10.704 Critical Temperature Time: 0 minutes 00:24:10.704 00:24:10.704 Number of Queues 00:24:10.704 ================ 00:24:10.704 Number of I/O Submission Queues: 127 00:24:10.704 Number of I/O Completion Queues: 127 00:24:10.704 00:24:10.704 Active Namespaces 00:24:10.704 ================= 00:24:10.704 Namespace ID:1 00:24:10.704 Error Recovery Timeout: Unlimited 00:24:10.704 Command Set Identifier: NVM (00h) 00:24:10.704 Deallocate: Supported 00:24:10.704 Deallocated/Unwritten Error: Not Supported 00:24:10.704 Deallocated Read Value: Unknown 00:24:10.704 Deallocate in Write Zeroes: Not Supported 00:24:10.704 Deallocated Guard Field: 0xFFFF 00:24:10.704 Flush: Supported 00:24:10.704 Reservation: Supported 00:24:10.704 Namespace Sharing Capabilities: Multiple Controllers 00:24:10.704 Size (in LBAs): 131072 (0GiB) 00:24:10.704 Capacity (in LBAs): 131072 (0GiB) 
00:24:10.704 Utilization (in LBAs): 131072 (0GiB) 00:24:10.704 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:10.704 EUI64: ABCDEF0123456789 00:24:10.704 UUID: fb2cec54-cbdc-4cea-8088-34d13b2a67cf 00:24:10.704 Thin Provisioning: Not Supported 00:24:10.704 Per-NS Atomic Units: Yes 00:24:10.704 Atomic Boundary Size (Normal): 0 00:24:10.704 Atomic Boundary Size (PFail): 0 00:24:10.704 Atomic Boundary Offset: 0 00:24:10.704 Maximum Single Source Range Length: 65535 00:24:10.704 Maximum Copy Length: 65535 00:24:10.704 Maximum Source Range Count: 1 00:24:10.704 NGUID/EUI64 Never Reused: No 00:24:10.704 Namespace Write Protected: No 00:24:10.704 Number of LBA Formats: 1 00:24:10.704 Current LBA Format: LBA Format #00 00:24:10.704 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:10.704 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:10.704 rmmod nvme_tcp 00:24:10.704 rmmod nvme_fabrics 00:24:10.704 rmmod nvme_keyring 00:24:10.704 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2055215 ']' 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2055215 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2055215 ']' 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2055215 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2055215 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2055215' 00:24:10.963 killing process with pid 2055215 00:24:10.963 11:50:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2055215 00:24:10.963 11:50:38 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2055215 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.963 11:50:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.500 11:50:41 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:13.500 00:24:13.500 real 0m10.682s 00:24:13.500 user 0m7.803s 00:24:13.500 sys 0m5.631s 00:24:13.500 11:50:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.500 11:50:41 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:13.500 ************************************ 00:24:13.500 END TEST nvmf_identify 00:24:13.500 ************************************ 00:24:13.500 11:50:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:13.500 11:50:41 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:13.500 11:50:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:13.500 11:50:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.500 11:50:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:13.500 ************************************ 00:24:13.500 START TEST nvmf_perf 00:24:13.500 ************************************ 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:13.500 * Looking for test storage... 
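The identify teardown logged above is three steps: delete the subsystem over the RPC socket, unload the kernel initiator modules, and kill the target process. A sketch of the manual equivalent (the rpc.py path is the one used in this workspace; the pid is the one killprocess reported for this run):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sudo modprobe -v -r nvme-tcp nvme-fabrics        # mirrors the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    kill 2055215                                     # nvmf target pid from this run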
00:24:13.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.500 11:50:41 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:13.500 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:13.501 11:50:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:13.501 11:50:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.072 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.072 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:20.072 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.073 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.073 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:20.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:20.073 00:24:20.073 --- 10.0.0.2 ping statistics --- 00:24:20.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.073 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:20.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:24:20.073 00:24:20.073 --- 10.0.0.1 ping statistics --- 00:24:20.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.073 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2059083 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2059083 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2059083 ']' 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:20.073 11:50:47 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:20.073 [2024-07-15 11:50:47.969208] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:20.073 [2024-07-15 11:50:47.969256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.073 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.073 [2024-07-15 11:50:48.042861] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:20.073 [2024-07-15 11:50:48.116382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.073 [2024-07-15 11:50:48.116421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
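The nvmf_tcp_init sequence traced above is the core of this fixture: the first E810 port is moved into a private network namespace and becomes the target side, while the second port stays in the root namespace as the initiator side. A minimal sketch of the same recipe, assuming this run's cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing, run as root:

  ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                            # reachability check before testing

The two ping transcripts around this point are exactly that final check, performed once from each side of the namespace boundary.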
00:24:20.073 [2024-07-15 11:50:48.116433] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.073 [2024-07-15 11:50:48.116457] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.073 [2024-07-15 11:50:48.116464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.073 [2024-07-15 11:50:48.116507] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.073 [2024-07-15 11:50:48.116601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:20.073 [2024-07-15 11:50:48.116688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.073 [2024-07-15 11:50:48.116689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.011 11:50:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:21.012 11:50:48 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:24.301 11:50:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:24.301 11:50:51 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:24.301 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:24.301 [2024-07-15 11:50:52.398449] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.560 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:24.560 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:24.560 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.819 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:24.819 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:25.078 11:50:52 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.078 [2024-07-15 11:50:53.137224] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.078 11:50:53 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:25.337 11:50:53 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:24:25.337 11:50:53 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:25.337 11:50:53 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:25.337 11:50:53 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:24:26.715 Initializing NVMe Controllers 00:24:26.715 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:24:26.715 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:24:26.715 Initialization complete. Launching workers. 00:24:26.715 ======================================================== 00:24:26.715 Latency(us) 00:24:26.715 Device Information : IOPS MiB/s Average min max 00:24:26.715 PCIE (0000:d8:00.0) NSID 1 from core 0: 102676.93 401.08 311.17 24.47 5197.91 00:24:26.715 ======================================================== 00:24:26.715 Total : 102676.93 401.08 311.17 24.47 5197.91 00:24:26.715 00:24:26.715 11:50:54 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:26.715 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.091 Initializing NVMe Controllers 00:24:28.091 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:28.091 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:28.091 Initialization complete. Launching workers. 
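Taken together, the rpc.py calls traced since the target came up form a compact provisioning script. A condensed sketch of exactly those six calls, with the long workspace path folded into a variable for readability:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o                  # as in NVMF_TRANSPORT_OPTS
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1: malloc ramdisk
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2: local NVMe at 0000:d8:00.0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Because NSID 1 is RAM-backed and NSID 2 is a real drive, the per-namespace rows in the perf tables that follow are expected to diverge.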
00:24:28.091 ======================================================== 00:24:28.091 Latency(us) 00:24:28.091 Device Information : IOPS MiB/s Average min max 00:24:28.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 86.00 0.34 11637.93 247.21 45110.49 00:24:28.091 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 68.00 0.27 14933.65 4982.21 48677.13 00:24:28.091 ======================================================== 00:24:28.091 Total : 154.00 0.60 13093.18 247.21 48677.13 00:24:28.091 00:24:28.091 11:50:55 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:28.091 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.467 Initializing NVMe Controllers 00:24:29.467 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:29.467 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:29.467 Initialization complete. Launching workers. 00:24:29.467 ======================================================== 00:24:29.467 Latency(us) 00:24:29.467 Device Information : IOPS MiB/s Average min max 00:24:29.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10132.51 39.58 3158.86 614.79 8182.12 00:24:29.467 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3872.81 15.13 8307.18 5553.84 15968.09 00:24:29.467 ======================================================== 00:24:29.467 Total : 14005.33 54.71 4582.50 614.79 15968.09 00:24:29.467 00:24:29.467 11:50:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:29.467 11:50:57 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:29.467 11:50:57 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.051 Initializing NVMe Controllers 00:24:32.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.051 Controller IO queue size 128, less than required. 00:24:32.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.051 Controller IO queue size 128, less than required. 00:24:32.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:32.051 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:32.051 Initialization complete. Launching workers. 
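Every spdk_nvme_perf invocation in this suite varies the same few knobs. An annotated sketch of the digest-enabled run above; the flag glosses follow the perf tool's help text, with -H/-I read as the TCP header/data digest switches (a gloss, not an authoritative reference):

  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  $PERF -q 32 \              # queue depth
        -o 4096 \            # I/O size in bytes
        -w randrw -M 50 \    # random mixed workload, 50% reads
        -t 1 \               # run time in seconds
        -H -I \              # TCP header and data digests (the -HI above)
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'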
00:24:32.051 ======================================================== 00:24:32.051 Latency(us) 00:24:32.051 Device Information : IOPS MiB/s Average min max 00:24:32.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1046.01 261.50 127546.37 65422.26 176346.69 00:24:32.051 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.13 152.03 218171.53 86927.94 296250.71 00:24:32.051 ======================================================== 00:24:32.051 Total : 1654.14 413.54 160864.04 65422.26 296250.71 00:24:32.051 00:24:32.051 11:50:59 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:32.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.051 No valid NVMe controllers or AIO or URING devices found 00:24:32.051 Initializing NVMe Controllers 00:24:32.051 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:32.051 Controller IO queue size 128, less than required. 00:24:32.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:32.051 Controller IO queue size 128, less than required. 00:24:32.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:32.051 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:32.051 WARNING: Some requested NVMe devices were skipped 00:24:32.051 11:50:59 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:32.051 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.597 Initializing NVMe Controllers 00:24:34.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:34.597 Controller IO queue size 128, less than required. 00:24:34.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.597 Controller IO queue size 128, less than required. 00:24:34.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:34.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:34.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:34.597 Initialization complete. Launching workers. 
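The skipped-namespace warnings in the -o 36964 run above are the intended outcome rather than a failure: 36964 is not a multiple of the namespaces' 512-byte sectors (36964 = 72 * 512 + 100), so perf removes both namespaces and then reports that no valid devices remain. The alignment rule is trivial to check up front:

  io_size=36964 sector=512
  (( io_size % sector == 0 )) || echo "io size $io_size is not sector-aligned (remainder $((io_size % sector)))"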
00:24:34.597 00:24:34.597 ==================== 00:24:34.597 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:34.597 TCP transport: 00:24:34.597 polls: 35427 00:24:34.597 idle_polls: 9376 00:24:34.597 sock_completions: 26051 00:24:34.597 nvme_completions: 4209 00:24:34.597 submitted_requests: 6376 00:24:34.597 queued_requests: 1 00:24:34.597 00:24:34.597 ==================== 00:24:34.597 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:34.597 TCP transport: 00:24:34.597 polls: 37480 00:24:34.597 idle_polls: 10974 00:24:34.597 sock_completions: 26506 00:24:34.597 nvme_completions: 4185 00:24:34.597 submitted_requests: 6256 00:24:34.597 queued_requests: 1 00:24:34.597 ======================================================== 00:24:34.597 Latency(us) 00:24:34.597 Device Information : IOPS MiB/s Average min max 00:24:34.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1050.86 262.71 126234.96 64251.16 223481.14 00:24:34.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1044.86 261.22 124642.73 47799.03 174585.08 00:24:34.597 ======================================================== 00:24:34.597 Total : 2095.72 523.93 125441.12 47799.03 223481.14 00:24:34.597 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:34.597 rmmod nvme_tcp 00:24:34.597 rmmod nvme_fabrics 00:24:34.597 rmmod nvme_keyring 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2059083 ']' 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2059083 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2059083 ']' 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2059083 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2059083 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:34.597 11:51:02 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2059083' 00:24:34.597 killing process with pid 2059083 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2059083 00:24:34.597 11:51:02 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2059083 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.536 11:51:04 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.075 11:51:06 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:39.075 00:24:39.075 real 0m25.368s 00:24:39.075 user 1m5.518s 00:24:39.075 sys 0m8.447s 00:24:39.075 11:51:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:39.075 11:51:06 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:39.075 ************************************ 00:24:39.075 END TEST nvmf_perf 00:24:39.075 ************************************ 00:24:39.075 11:51:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:39.075 11:51:06 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.075 11:51:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:39.075 11:51:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:39.075 11:51:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:39.075 ************************************ 00:24:39.075 START TEST nvmf_fio_host 00:24:39.075 ************************************ 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:39.075 * Looking for test storage... 
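The nvmf_perf teardown traced just before this point mirrors its setup: drop the subsystem, stop the target, unload the kernel modules, and dismantle the test network. A condensed sketch (RPC as in the earlier sketch; the namespace removal runs with tracing disabled, so the ip netns delete line is an assumption about what _remove_spdk_ns does):

  $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 2059083                       # the nvmfpid recorded at startup
  modprobe -v -r nvme-tcp            # drops nvme_tcp, nvme_fabrics, nvme_keyring as logged
  ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1           # as traced at nvmf/common.sh@279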
00:24:39.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.075 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:39.076 11:51:06 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:44.346 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
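The whitelist rebuilt here for fio.sh keys on PCI vendor:device pairs, 0x8086:0x159b marking these two E810 ports. sysfs exposes the same identifiers directly, so the values echoed below can be cross-checked with:

  cat /sys/bus/pci/devices/0000:af:00.0/vendor   # 0x8086 (Intel)
  cat /sys/bus/pci/devices/0000:af:00.0/device   # 0x159b (an E810 variant)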
00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:44.346 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:44.346 Found net devices under 0000:af:00.0: cvl_0_0 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:44.346 Found net devices under 0000:af:00.1: cvl_0_1 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
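As the Found-net-devices lines show, resolving a PCI function to its kernel interface is a sysfs glob rather than a driver query; a minimal sketch of the lookup nvmf/common.sh performs:

  for pci in 0000:af:00.0 0000:af:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [[ -e $path ]] || continue              # glob did not match: no bound netdev
          echo "Found net devices under $pci: ${path##*/}"
      done
  done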
00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:44.346 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:44.347 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.347 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:44.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:24:44.606 00:24:44.606 --- 10.0.0.2 ping statistics --- 00:24:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.606 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:44.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:44.606 00:24:44.606 --- 10.0.0.1 ping statistics --- 00:24:44.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.606 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:44.606 11:51:12 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2065985 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2065985 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2065985 ']' 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.865 11:51:12 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:44.865 [2024-07-15 11:51:12.798537] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:24:44.865 [2024-07-15 11:51:12.798590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.865 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.865 [2024-07-15 11:51:12.871330] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.865 [2024-07-15 11:51:12.945265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
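The launch pattern repeats from the perf suite: nvmf_tgt runs inside the target-side namespace and the harness blocks in waitforlisten until the RPC socket answers. A simplified start-and-wait sketch (RPC as in the earlier sketch; the polling loop is illustrative, not the harness's literal code):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &        # shm id 0, all tracepoint groups, cores 0-3
  nvmfpid=$!
  until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5                      # wait for the UNIX-domain RPC socket
  done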
00:24:44.865 [2024-07-15 11:51:12.945306] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.865 [2024-07-15 11:51:12.945315] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.865 [2024-07-15 11:51:12.945323] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.865 [2024-07-15 11:51:12.945330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.865 [2024-07-15 11:51:12.945421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.865 [2024-07-15 11:51:12.945534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.865 [2024-07-15 11:51:12.945622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.865 [2024-07-15 11:51:12.945623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.800 [2024-07-15 11:51:13.746005] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.800 11:51:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:46.059 Malloc1 00:24:46.059 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.318 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:46.318 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.577 [2024-07-15 11:51:14.547246] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.577 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:46.837 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:46.838 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:46.838 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:46.838 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:46.838 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:46.838 11:51:14 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:47.097 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:47.097 fio-3.35 00:24:47.097 Starting 1 thread 00:24:47.097 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.633 00:24:49.633 test: (groupid=0, jobs=1): err= 0: pid=2066434: Mon Jul 15 11:51:17 2024 00:24:49.633 read: IOPS=12.4k, BW=48.5MiB/s (50.9MB/s)(97.3MiB/2005msec) 00:24:49.633 slat (nsec): min=1513, max=248179, avg=1642.77, stdev=2278.32 00:24:49.633 clat (usec): min=3308, max=10587, avg=5692.14, stdev=440.89 00:24:49.633 lat (usec): min=3342, max=10598, avg=5693.79, stdev=440.96 00:24:49.633 clat percentiles (usec): 00:24:49.633 | 1.00th=[ 4686], 5.00th=[ 5014], 10.00th=[ 5145], 20.00th=[ 5342], 00:24:49.633 | 30.00th=[ 5473], 40.00th=[ 5604], 50.00th=[ 5669], 60.00th=[ 5800], 00:24:49.633 | 70.00th=[ 5866], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6325], 00:24:49.633 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 9503], 99.95th=[10028], 00:24:49.633 | 99.99th=[10552] 00:24:49.633 bw ( KiB/s): 
min=48144, max=50496, per=99.96%, avg=49682.00, stdev=1047.85, samples=4 00:24:49.633 iops : min=12036, max=12624, avg=12420.50, stdev=261.96, samples=4 00:24:49.633 write: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(97.2MiB/2005msec); 0 zone resets 00:24:49.633 slat (nsec): min=1564, max=236167, avg=1723.46, stdev=1658.12 00:24:49.633 clat (usec): min=2516, max=8788, avg=4548.70, stdev=342.00 00:24:49.633 lat (usec): min=2531, max=8790, avg=4550.42, stdev=341.99 00:24:49.633 clat percentiles (usec): 00:24:49.633 | 1.00th=[ 3720], 5.00th=[ 4015], 10.00th=[ 4146], 20.00th=[ 4293], 00:24:49.633 | 30.00th=[ 4359], 40.00th=[ 4490], 50.00th=[ 4555], 60.00th=[ 4621], 00:24:49.633 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5080], 00:24:49.633 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 6587], 99.95th=[ 7373], 00:24:49.633 | 99.99th=[ 8717] 00:24:49.633 bw ( KiB/s): min=48856, max=50248, per=100.00%, avg=49654.00, stdev=585.20, samples=4 00:24:49.633 iops : min=12214, max=12562, avg=12413.50, stdev=146.30, samples=4 00:24:49.633 lat (msec) : 4=2.37%, 10=97.60%, 20=0.03% 00:24:49.633 cpu : usr=63.02%, sys=30.94%, ctx=75, majf=0, minf=6 00:24:49.633 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:24:49.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:49.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:49.633 issued rwts: total=24913,24882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:49.633 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:49.633 00:24:49.633 Run status group 0 (all jobs): 00:24:49.633 READ: bw=48.5MiB/s (50.9MB/s), 48.5MiB/s-48.5MiB/s (50.9MB/s-50.9MB/s), io=97.3MiB (102MB), run=2005-2005msec 00:24:49.633 WRITE: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=97.2MiB (102MB), run=2005-2005msec 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:49.633 11:51:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:49.892 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:49.892 fio-3.35 00:24:49.892 Starting 1 thread 00:24:49.892 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.268 [2024-07-15 11:51:19.271797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1736920 is same with the state(5) to be set 00:24:51.268 [2024-07-15 11:51:19.271866] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1736920 is same with the state(5) to be set 00:24:52.202 00:24:52.202 test: (groupid=0, jobs=1): err= 0: pid=2067073: Mon Jul 15 11:51:20 2024 00:24:52.202 read: IOPS=10.3k, BW=161MiB/s (169MB/s)(322MiB/2005msec) 00:24:52.202 slat (usec): min=2, max=110, avg= 2.75, stdev= 1.60 00:24:52.202 clat (usec): min=2438, max=52372, avg=7694.90, stdev=4062.24 00:24:52.202 lat (usec): min=2441, max=52375, avg=7697.65, stdev=4062.48 00:24:52.202 clat percentiles (usec): 00:24:52.202 | 1.00th=[ 3687], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5604], 00:24:52.202 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7439], 00:24:52.202 | 70.00th=[ 8029], 80.00th=[ 8848], 90.00th=[10552], 95.00th=[13173], 00:24:52.202 | 99.00th=[20317], 99.50th=[44827], 99.90th=[51643], 99.95th=[52167], 00:24:52.202 | 99.99th=[52167] 00:24:52.202 bw ( KiB/s): min=68896, max=98560, per=49.33%, avg=81248.00, stdev=12620.74, samples=4 00:24:52.202 iops : min= 4306, max= 6160, avg=5078.00, stdev=788.80, samples=4 00:24:52.202 write: IOPS=6447, BW=101MiB/s (106MB/s)(166MiB/1650msec); 0 zone resets 00:24:52.202 slat (usec): min=28, max=288, avg=30.01, stdev= 6.17 00:24:52.202 clat (usec): min=3414, max=15823, avg=8351.76, stdev=1666.65 00:24:52.202 lat (usec): min=3443, max=15855, avg=8381.77, stdev=1668.50 00:24:52.202 clat percentiles (usec): 00:24:52.202 | 1.00th=[ 5604], 5.00th=[ 6259], 10.00th=[ 6587], 20.00th=[ 7046], 00:24:52.202 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8455], 00:24:52.202 | 70.00th=[ 8848], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11076], 00:24:52.202 | 99.00th=[14222], 99.50th=[15139], 99.90th=[15795], 99.95th=[15795], 00:24:52.202 | 
99.99th=[15795] 00:24:52.202 bw ( KiB/s): min=70656, max=103168, per=82.07%, avg=84664.00, stdev=13686.98, samples=4 00:24:52.202 iops : min= 4416, max= 6448, avg=5291.50, stdev=855.44, samples=4 00:24:52.202 lat (msec) : 4=1.70%, 10=85.56%, 20=12.05%, 50=0.51%, 100=0.18% 00:24:52.202 cpu : usr=80.35%, sys=15.91%, ctx=56, majf=0, minf=3 00:24:52.203 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:52.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:52.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:52.203 issued rwts: total=20638,10638,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:52.203 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:52.203 00:24:52.203 Run status group 0 (all jobs): 00:24:52.203 READ: bw=161MiB/s (169MB/s), 161MiB/s-161MiB/s (169MB/s-169MB/s), io=322MiB (338MB), run=2005-2005msec 00:24:52.203 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=166MiB (174MB), run=1650-1650msec 00:24:52.203 11:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.460 rmmod nvme_tcp 00:24:52.460 rmmod nvme_fabrics 00:24:52.460 rmmod nvme_keyring 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2065985 ']' 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2065985 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2065985 ']' 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2065985 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:52.460 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2065985 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2065985' 00:24:52.719 killing process with pid 2065985 00:24:52.719 11:51:20 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2065985 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2065985 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.719 11:51:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.251 11:51:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:55.251 00:24:55.251 real 0m16.196s 00:24:55.251 user 0m52.150s 00:24:55.251 sys 0m6.973s 00:24:55.251 11:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.251 11:51:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:55.251 ************************************ 00:24:55.251 END TEST nvmf_fio_host 00:24:55.251 ************************************ 00:24:55.251 11:51:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:55.251 11:51:22 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:55.251 11:51:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:55.251 11:51:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.251 11:51:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:55.251 ************************************ 00:24:55.251 START TEST nvmf_failover 00:24:55.251 ************************************ 00:24:55.251 11:51:22 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:55.251 * Looking for test storage... 
00:24:55.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:55.251 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:55.252 11:51:23 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:01.857 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:01.857 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:01.857 Found net devices under 0000:af:00.0: cvl_0_0 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.857 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:01.858 Found net devices under 0000:af:00.1: cvl_0_1 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:01.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:25:01.858 00:25:01.858 --- 10.0.0.2 ping statistics --- 00:25:01.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.858 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:25:01.858 00:25:01.858 --- 10.0.0.1 ping statistics --- 00:25:01.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.858 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2071063 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2071063 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2071063 ']' 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.858 11:51:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:01.858 [2024-07-15 11:51:29.879994] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:25:01.858 [2024-07-15 11:51:29.880042] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.858 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.858 [2024-07-15 11:51:29.954082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.118 [2024-07-15 11:51:30.037683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.118 [2024-07-15 11:51:30.037721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.118 [2024-07-15 11:51:30.037731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.118 [2024-07-15 11:51:30.037740] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.118 [2024-07-15 11:51:30.037748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.118 [2024-07-15 11:51:30.037796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.118 [2024-07-15 11:51:30.037870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.118 [2024-07-15 11:51:30.037873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:02.687 11:51:30 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.688 11:51:30 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.947 [2024-07-15 11:51:30.884912] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.947 11:51:30 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:03.206 Malloc0 00:25:03.206 11:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.206 11:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.466 11:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.725 [2024-07-15 11:51:31.639207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.725 11:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.725 [2024-07-15 
11:51:31.815697] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.984 11:51:31 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:03.984 [2024-07-15 11:51:32.000266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2071553 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2071553 /var/tmp/bdevperf.sock 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2071553 ']' 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:03.984 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.985 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:03.985 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:04.924 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:04.924 11:51:32 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:04.924 11:51:32 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.183 NVMe0n1 00:25:05.183 11:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.440 00:25:05.440 11:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:05.440 11:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2071824 00:25:05.441 11:51:33 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:06.819 11:51:34 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:06.819 [2024-07-15 11:51:34.655600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6af310 is same with the state(5) to be set 00:25:06.819 [2024-07-15 11:51:34.655656] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6af310 is same with the state(5) to be set 00:25:06.819 [2024-07-15 11:51:34.656288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6af310 is same with the state(5) to be set 00:25:06.819 11:51:34 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:10.112 11:51:37 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:10.112 00:25:10.112 11:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:10.112 [2024-07-15 11:51:38.167828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set
00:25:10.113 [2024-07-15 11:51:38.168543]
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168626] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168635] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168652] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.113 [2024-07-15 11:51:38.168695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 
00:25:10.114 [2024-07-15 11:51:38.168734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168757] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168774] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168790] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168799] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168807] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168816] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168839] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168902] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168911] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168919] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168928] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is 
same with the state(5) to be set 00:25:10.114 [2024-07-15 11:51:38.168936] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b06f0 is same with the state(5) to be set 00:25:10.114 11:51:38 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:13.399 11:51:41 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:13.399 [2024-07-15 11:51:41.366018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:13.399 11:51:41 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:14.334 11:51:42 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:14.594 [2024-07-15 11:51:42.562625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562679] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562724] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562733] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562742] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562768] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562776] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562793] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 
00:25:14.594 [2024-07-15 11:51:42.562802] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562842] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562860] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562869] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562894] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562907] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562916] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562924] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562950] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562968] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562977] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562985] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.562994] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is 
same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563019] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563027] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563036] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563061] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563077] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563086] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563102] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563111] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563119] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563127] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563137] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563146] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563163] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563188] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563196] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563204] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563229] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563240] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563257] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563274] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563290] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563298] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563307] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563315] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563323] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563332] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563358] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6b0dd0 is same with the state(5) to be set 00:25:14.594 [2024-07-15 11:51:42.563366] 
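For reference, the failover path driven above can be replayed by hand with the same rpc.py calls the harness uses. This is a minimal sketch, assuming an SPDK checkout at $SPDK, the bdevperf RPC socket at /var/tmp/bdevperf.sock, and the subsystem NQN and addresses visible in this run; the sleeps are illustrative:

    #!/usr/bin/env bash
    # Replay of the listener migration traced at host/failover.sh@45-57 above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    NQN=nqn.2016-06.io.spdk:cnode1

    # Give the bdevperf initiator a second path via the listener on port 4422.
    "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$NQN"

    # Retire the primary listener (4421); in-flight I/O is expected to fail over to 4422.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # Restore a listener on 4420, then retire 4422 to force a second failover.
    "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422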
00:25:14.594 11:51:42 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2071824
00:25:21.167 0
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2071553
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2071553 ']'
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2071553
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2071553
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2071553'
00:25:21.167 killing process with pid 2071553
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2071553
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2071553
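The killprocess helper whose xtrace appears above can be read back into shell form. This is a condensed reconstruction from the trace alone, not the verbatim common/autotest_common.sh source; the @-tags in the comments point at the traced lines, and the sudo branch is never taken in this run, so only the branch exercised here is shown:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @948: a pid argument is required
        kill -0 "$pid" || return                            # @952: bail out if already gone
        [ "$(uname)" = Linux ] &&                           # @953: Linux-only comm lookup
            process_name=$(ps --no-headers -o comm= "$pid") # @954: reactor_0 in this run
        if [ "$process_name" != sudo ]; then                # @958: false for reactor_0
            echo "killing process with pid $pid"            # @966
            kill "$pid"                                     # @967
            wait "$pid"                                     # @972: reap it
        fi
    }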
00:25:21.167 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:21.167 [2024-07-15 11:51:32.072458] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:25:21.167 [2024-07-15 11:51:32.072516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2071553 ]
00:25:21.167 EAL: No free 2048 kB hugepages reported on node 1
00:25:21.167 [2024-07-15 11:51:32.140829] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:21.167 [2024-07-15 11:51:32.211883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:21.167 Running I/O for 15 seconds...
00:25:21.167 [2024-07-15 11:51:34.657565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:21.167 [2024-07-15 11:51:34.657602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.169 [... remaining print_command/print_completion pairs for READ (lba 100960-101408) and WRITE (lba 101416-101664), every command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:25:21.169 [2024-07-15 11:51:34.659405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.169 [2024-07-15 11:51:34.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.169 [2024-07-15 11:51:34.659424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.169 [2024-07-15 11:51:34.659434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.169 [2024-07-15 11:51:34.659445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 
[2024-07-15 11:51:34.659819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.170 [2024-07-15 11:51:34.659829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.659867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101848 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.659876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.659895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.659903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101856 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.659912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.659929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.659937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101864 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.659946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659955] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.659962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.659970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101872 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.659979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.659988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.659995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101880 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.660011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660029] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101888 len:8 PRP1 0x0 PRP2 0x0 
00:25:21.170 [2024-07-15 11:51:34.660046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660055] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101896 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.660078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101904 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.660113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660122] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101912 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.660147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101920 len:8 PRP1 0x0 PRP2 0x0 00:25:21.170 [2024-07-15 11:51:34.660180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.170 [2024-07-15 11:51:34.660189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.170 [2024-07-15 11:51:34.660196] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.170 [2024-07-15 11:51:34.660204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101928 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.660213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.660223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.171 [2024-07-15 11:51:34.660230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.171 [2024-07-15 11:51:34.660237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101936 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.660247] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.660256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.171 [2024-07-15 11:51:34.660263] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.171 [2024-07-15 11:51:34.660271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101944 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.660280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.660289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.171 [2024-07-15 11:51:34.660296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.171 [2024-07-15 11:51:34.660303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101952 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.660312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.672179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.171 [2024-07-15 11:51:34.672191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.171 [2024-07-15 11:51:34.672199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101960 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.672208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.672218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.171 [2024-07-15 11:51:34.672235] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.171 [2024-07-15 11:51:34.672243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101968 len:8 PRP1 0x0 PRP2 0x0 00:25:21.171 [2024-07-15 11:51:34.672252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.171 [2024-07-15 11:51:34.672296] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1581940 was disconnected and freed. reset controller. 
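The status printed for every aborted command, (00/08), is NVMe Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion": tearing down the TCP connection on 10.0.0.2:4420 deletes the submission queue, so the driver fails every outstanding command and manually completes everything still queued before freeing the qpair. A quick way to tally such a flood from a saved console log, as a sketch (the build.log filename is illustrative):

    # Count aborted READs vs WRITEs on the I/O queue (sqid:1).
    grep -Eo 'NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c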
00:25:21.171 [2024-07-15 11:51:34.672308] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-15 11:51:34.672334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 11:51:34.672344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 11:51:34.672354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 11:51:34.672363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 11:51:34.672372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 11:51:34.672382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 11:51:34.672392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-07-15 11:51:34.672401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 11:51:34.672410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-15 11:51:34.672451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155b590 (9): Bad file descriptor
[2024-07-15 11:51:34.675366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-15 11:51:34.748316] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
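This block is the failover path working as intended: bdev_nvme_failover_trid switches the controller to the alternate transport ID (10.0.0.2:4421), the four outstanding admin ASYNC EVENT REQUESTs are aborted along with the I/O, the dead TCP qpair can no longer be flushed (Bad file descriptor), and the reconnect to the new portal finishes with "Resetting controller successful." For this to work the controller must have been attached with a second trid up front. A minimal sketch of the usual setup with SPDK's stock rpc.py (the bdev name Nvme0 is illustrative; the NQN and addresses mirror the log; newer SPDK releases may additionally require an explicit -x failover multipath flag on the second attach):

    # Expose the subsystem on two portals so the initiator has an alternate path.
    sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sudo scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

    # Attach the controller on the first portal, then register the second portal
    # under the same bdev name so bdev_nvme records it as a failover trid.
    sudo scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    sudo scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

    # Dropping the active listener kills the 4420 connection and forces the
    # SQ deletion / abort / failover sequence logged above.
    sudo scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the reset completes, bdev_nvme resubmits I/O on the new qpair; the second abort flood below (starting at lba 66360) is the same sequence repeating on the next path change.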
00:25:21.171 [2024-07-15 11:51:38.170288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[… repeated *NOTICE* pairs elided: every outstanding READ (lba 66360-66752) and WRITE (lba 66760-67176, len:8, SGL DATA BLOCK OFFSET) on sqid:1 is printed and completed with ABORTED - SQ DELETION (00/08) …]
[… nvme_qpair_abort_queued_reqs again logs "aborting queued i/o" while the queued WRITEs (lba 67184 onward, len:8, PRP1 0x0 PRP2 0x0) are completed manually with the same status …]
00:25:21.174 [2024-07-15 11:51:38.172594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued
i/o 00:25:21.174 [2024-07-15 11:51:38.172602] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67224 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67232 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67240 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172698] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67248 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172740] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67256 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172774] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67264 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172810] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67272 len:8 PRP1 0x0 PRP2 0x0 00:25:21.174 [2024-07-15 11:51:38.172826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.174 [2024-07-15 11:51:38.172839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.174 [2024-07-15 11:51:38.172847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.174 [2024-07-15 11:51:38.172854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67280 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.172864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.172875] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.172882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.172890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67288 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.172899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.172909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.172918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.172925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67296 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.172934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.172943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.172950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.172959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67304 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.172970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.172980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.172987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.172994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67312 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.173003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.173012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.173019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.173029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67320 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.173039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.173050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.173058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67328 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67336 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67344 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186248] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67352 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67360 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186335] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 
[2024-07-15 11:51:38.186354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67368 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186380] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:21.175 [2024-07-15 11:51:38.186389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:21.175 [2024-07-15 11:51:38.186399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67376 len:8 PRP1 0x0 PRP2 0x0 00:25:21.175 [2024-07-15 11:51:38.186410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17263c0 was disconnected and freed. reset controller. 00:25:21.175 [2024-07-15 11:51:38.186479] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:21.175 [2024-07-15 11:51:38.186510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.175 [2024-07-15 11:51:38.186523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.175 [2024-07-15 11:51:38.186548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.175 [2024-07-15 11:51:38.186580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:21.175 [2024-07-15 11:51:38.186609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:38.186621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.175 [2024-07-15 11:51:38.186651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155b590 (9): Bad file descriptor 00:25:21.175 [2024-07-15 11:51:38.190260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:21.175 [2024-07-15 11:51:38.219101] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
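Every completion in the run above (and in the run that follows) carries the same status word, printed as "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0": status code type 0x0 (generic command status) with status code 0x08 (command aborted due to SQ deletion), and the phase, more and do-not-retry bits all clear. A minimal stand-alone decoder for that field, shown only to make the notation readable (a hypothetical helper, not SPDK's spdk_nvme_print_completion):

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the NVMe completion Status Field (CQE dword 3, bits 31:17,
 * with the phase tag in bit 16), as printed in the log above. */
static void print_status(uint32_t dw3)
{
    unsigned p   = (dw3 >> 16) & 0x1;  /* phase tag           */
    unsigned sc  = (dw3 >> 17) & 0xff; /* status code         */
    unsigned sct = (dw3 >> 25) & 0x7;  /* status code type    */
    unsigned m   = (dw3 >> 30) & 0x1;  /* more info available */
    unsigned dnr = (dw3 >> 31) & 0x1;  /* do not retry        */

    /* SCT 0x0 (generic) / SC 0x08 is "command aborted due to SQ deletion". */
    const char *text = (sct == 0x0 && sc == 0x08) ? "ABORTED - SQ DELETION"
                                                  : "OTHER";
    printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n", text, sct, sc, p, m, dnr);
}

int main(void)
{
    /* The status carried by every completion above: SC 0x08 in bits 24:17,
     * everything else zero. */
    print_status(0x08u << 17); /* -> ABORTED - SQ DELETION (00/08) p:0 m:0 dnr:0 */
    return 0;
}
```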
00:25:21.175 [2024-07-15 11:51:42.563737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:89968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:89976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:89992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:90000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:90008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:90016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.563985] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:90040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.563994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:90056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:90072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:90080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.175 [2024-07-15 11:51:42.564102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.175 [2024-07-15 11:51:42.564112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:90096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564187] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:90152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:90168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:90184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:97 nsid:1 lba:90200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:90248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:90256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:90272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90280 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.176 [2024-07-15 11:51:42.564619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:90296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.176 [2024-07-15 11:51:42.564628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:90360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:21.177 [2024-07-15 11:51:42.564787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:90376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:90384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:90400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:90408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:90416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.564981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.564990] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.565000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:90448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.565010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.565020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.565039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.565048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.177 [2024-07-15 11:51:42.565059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.177 [2024-07-15 11:51:42.565070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565190] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:21.178 [2024-07-15 11:51:42.565662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:90736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:90744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 [2024-07-15 11:51:42.565771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:21.178 [2024-07-15 11:51:42.565780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:21.178 
[2024-07-15 11:51:42.565791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:21.178 [2024-07-15 11:51:42.565800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 11:51:42.565810 through 11:51:42.566296: the same WRITE / ABORTED - SQ DELETION (00/08) pair repeats for lba:90776 through lba:90968 (len:8 each), cids 19, 30, 35, 22, 56, 25, 5, 117, 13, 37, 111, 64, 32, 61, 125, 11, 41, 21, 90, 63, 109, 99, 1, 39, 14 ...]
00:25:21.179 [2024-07-15 11:51:42.566318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:21.179 [2024-07-15 11:51:42.566326] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:21.179 [2024-07-15 11:51:42.566334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90976 len:8 PRP1 0x0 PRP2 0x0
00:25:21.179 [2024-07-15 11:51:42.566343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.179 [2024-07-15 11:51:42.566389] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x17261b0 was disconnected and freed. reset controller.
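Status (00/08) in the flood above is NVMe's generic "command aborted due to SQ deletion", i.e. every WRITE still queued on qid:1 is completed with an abort while the qpair is torn down for failover; nothing is failing on the device itself. A quick way to tally these aborts when reading a capture like this one (file path as used by host/failover.sh@94 below; a sketch, not part of the test):

    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt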
00:25:21.179 [2024-07-15 11:51:42.566401] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:21.179 [2024-07-15 11:51:42.566423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.179 [2024-07-15 11:51:42.566433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.179 [2024-07-15 11:51:42.566443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.179 [2024-07-15 11:51:42.566452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.179 [2024-07-15 11:51:42.566461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.179 [2024-07-15 11:51:42.566470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.179 [2024-07-15 11:51:42.566482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:21.179 [2024-07-15 11:51:42.566491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:21.179 [2024-07-15 11:51:42.566500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:21.179 [2024-07-15 11:51:42.566525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x155b590 (9): Bad file descriptor
00:25:21.179 [2024-07-15 11:51:42.569212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:21.179 [2024-07-15 11:51:42.641552] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
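This failover (4422 -> 4420) only works because the bdevperf RPC socket was given the same controller name against all three target ports, so bdev_nvme holds alternate trids to retry; the registration shows up again below at host/failover.sh@78-@80 and boils down to this sketch (socket path, address and NQN all taken from the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # repeating -b NVMe0 with a new -s appears to add an alternate path for
    # failover, not a second bdev: only the first call prints NVMe0n1 in the trace
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1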
00:25:21.179
00:25:21.179 Latency(us)
00:25:21.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.179 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:21.179 Verification LBA range: start 0x0 length 0x4000
00:25:21.179 NVMe0n1 : 15.01 11853.05 46.30 549.35 0.00 10299.41 809.37 24326.96
00:25:21.179 ===================================================================================================================
00:25:21.179 Total : 11853.05 46.30 549.35 0.00 10299.41 809.37 24326.96
00:25:21.179 Received shutdown signal, test time was about 15.000000 seconds
00:25:21.179
00:25:21.179 Latency(us)
00:25:21.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:21.179 ===================================================================================================================
00:25:21.179 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2074230
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2074230 /var/tmp/bdevperf.sock
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2074230 ']'
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:21.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
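The host/failover.sh@65-@67 lines above are the pass/fail gate for the first phase: it is expected to have logged exactly three 'Resetting controller successful' messages, one per forced failover. Reconstructed as a sketch (variable and file names taken from the trace, the surrounding error handling is mine):

    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")   # count=3 in this run
    if (( count != 3 )); then
        return 1   # the trace's (( count != 3 )) arithmetic test, spelled out
    fi

With that check passed, the trace restarts bdevperf in -z (wait-for-RPC) mode, and waitforlisten presumably polls /var/tmp/bdevperf.sock for up to max_retries=100 attempts before the next phase begins.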
00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.179 11:51:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.754 11:51:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:21.754 11:51:49 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:21.754 11:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.015 [2024-07-15 11:51:49.897532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.015 11:51:49 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:22.015 [2024-07-15 11:51:50.078133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:22.015 11:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.582 NVMe0n1 00:25:22.582 11:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.841 00:25:22.841 11:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.100 00:25:23.100 11:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.100 11:51:50 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:23.100 11:51:51 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.359 11:51:51 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:26.647 11:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:26.647 11:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:26.647 11:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2075263 00:25:26.647 11:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:26.647 11:51:54 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2075263 00:25:27.607 0 00:25:27.607 11:51:55 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.607 [2024-07-15 11:51:48.942636] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:25:27.607 [2024-07-15 11:51:48.942693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074230 ] 00:25:27.607 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.607 [2024-07-15 11:51:49.013825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.607 [2024-07-15 11:51:49.078515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.607 [2024-07-15 11:51:51.323189] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:27.607 [2024-07-15 11:51:51.323238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.607 [2024-07-15 11:51:51.323253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.607 [2024-07-15 11:51:51.323265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.607 [2024-07-15 11:51:51.323274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.607 [2024-07-15 11:51:51.323284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.607 [2024-07-15 11:51:51.323294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.608 [2024-07-15 11:51:51.323304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.608 [2024-07-15 11:51:51.323314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.608 [2024-07-15 11:51:51.323323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:27.608 [2024-07-15 11:51:51.323354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:27.608 [2024-07-15 11:51:51.323372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253b590 (9): Bad file descriptor 00:25:27.608 [2024-07-15 11:51:51.373231] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:27.608 Running I/O for 1 seconds... 
00:25:27.608
00:25:27.608 Latency(us)
00:25:27.608 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:27.608 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:27.608 Verification LBA range: start 0x0 length 0x4000
00:25:27.608 NVMe0n1 : 1.01 11996.70 46.86 0.00 0.00 10627.84 2424.83 17825.79
00:25:27.608 ===================================================================================================================
00:25:27.608 Total : 11996.70 46.86 0.00 0.00 10627.84 2424.83 17825.79
00:25:27.608 11:51:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:27.608 11:51:55 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:27.873 11:51:55 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:28.131
11:51:59 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:31.935 11:51:59 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:31.935 11:51:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:31.935 11:51:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:31.935 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:31.935 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:31.935 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:31.935 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:31.935 rmmod nvme_tcp 00:25:31.935 rmmod nvme_fabrics 00:25:32.221 rmmod nvme_keyring 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2071063 ']' 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2071063 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2071063 ']' 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2071063 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2071063 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2071063' 00:25:32.221 killing process with pid 2071063 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2071063 00:25:32.221 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2071063 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:32.481 11:52:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.386 11:52:02 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:34.386 00:25:34.386 real 0m39.476s 00:25:34.386 user 2m1.687s 00:25:34.386 sys 0m9.998s 00:25:34.386 11:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:34.386 11:52:02 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
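The common/autotest_common.sh@948-@972 sequence that repeats above for pids 2074230 and 2071063 is killprocess(); from the branches actually taken in this trace it behaves roughly like the sketch below (the '[' $process_name = sudo ']' branch is never taken here and is omitted):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                           # @948: refuse an empty pid
        kill -0 "$pid"                                      # @952: still alive?
        process_name=$(ps --no-headers -o comm= "$pid")     # @954: reactor_0 / reactor_1 here
        echo "killing process with pid $pid"                # @966
        kill "$pid"                                         # @967
        wait "$pid"                                         # @972: block until it exits
    }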
00:25:34.386 ************************************ 00:25:34.386 END TEST nvmf_failover 00:25:34.386 ************************************ 00:25:34.386 11:52:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:34.386 11:52:02 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:34.386 11:52:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:34.386 11:52:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:34.386 11:52:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:34.646 ************************************ 00:25:34.646 START TEST nvmf_host_discovery 00:25:34.646 ************************************ 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:34.646 * Looking for test storage... 00:25:34.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:34.646 11:52:02 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:34.646 11:52:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:41.226 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.227 11:52:09 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:41.227 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:41.227 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:41.227 11:52:09 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:41.227 Found net devices under 0000:af:00.0: cvl_0_0 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:41.227 Found net devices under 0000:af:00.1: cvl_0_1 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.227 11:52:09 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:41.227 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.486 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:41.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:25:41.746 00:25:41.746 --- 10.0.0.2 ping statistics --- 00:25:41.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.746 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:41.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:25:41.746 00:25:41.746 --- 10.0.0.1 ping statistics --- 00:25:41.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.746 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2079782 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2079782 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2079782 ']' 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:41.746 11:52:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:41.746 [2024-07-15 11:52:09.725086] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:25:41.746 [2024-07-15 11:52:09.725131] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.746 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.746 [2024-07-15 11:52:09.798873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.004 [2024-07-15 11:52:09.867711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.004 [2024-07-15 11:52:09.867753] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.004 [2024-07-15 11:52:09.867764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.004 [2024-07-15 11:52:09.867772] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.004 [2024-07-15 11:52:09.867796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
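nvmf_tgt is started under ip netns exec cvl_0_0_ns_spdk because of the data path nvmf_tcp_init assembled just above (nvmf/common.sh@242-@268): one e810 port is moved into a private namespace for the target while the peer port stays in the root namespace for the initiator, so the 10.0.0.1 <-> 10.0.0.2 pings cross the NIC rather than loopback. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1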
00:25:42.004 [2024-07-15 11:52:09.867818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 [2024-07-15 11:52:10.567329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 [2024-07-15 11:52:10.575503] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 null0 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 null1 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2080019 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2080019 /tmp/host.sock 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2080019 ']' 00:25:42.572 11:52:10 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:42.572 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:42.572 11:52:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:42.573 11:52:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:42.573 [2024-07-15 11:52:10.653464] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:25:42.573 [2024-07-15 11:52:10.653512] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080019 ] 00:25:42.832 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.832 [2024-07-15 11:52:10.722211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.832 [2024-07-15 11:52:10.796601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.399 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:43.659 11:52:11 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.659 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 [2024-07-15 11:52:11.786678] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:25:43.919 11:52:11 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:44.487 [2024-07-15 11:52:12.506932] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:44.487 [2024-07-15 11:52:12.506952] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:44.487 [2024-07-15 11:52:12.506968] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:44.746 [2024-07-15 11:52:12.593232] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:44.746 [2024-07-15 11:52:12.819933] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:25:44.746 [2024-07-15 11:52:12.819955] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.005 11:52:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.005 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 [2024-07-15 11:52:13.286714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:45.277 [2024-07-15 11:52:13.287488] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:45.277 [2024-07-15 11:52:13.287510] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:45.277 [2024-07-15 11:52:13.373079] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:45.277 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # sort -n 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.539 [2024-07-15 11:52:13.431685] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:45.539 [2024-07-15 11:52:13.431702] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:45.539 [2024-07-15 11:52:13.431710] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:45.539 11:52:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:25:46.477 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.477 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.478 [2024-07-15 11:52:14.558986] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:46.478 [2024-07-15 11:52:14.559008] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:46.478 [2024-07-15 11:52:14.561740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.478 [2024-07-15 11:52:14.561762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.478 [2024-07-15 11:52:14.561773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.478 [2024-07-15 11:52:14.561783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.478 [2024-07-15 11:52:14.561793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.478 [2024-07-15 11:52:14.561802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.478 [2024-07-15 11:52:14.561812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:46.478 [2024-07-15 11:52:14.561821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:46.478 [2024-07-15 11:52:14.561831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.478 11:52:14 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.478 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.478 [2024-07-15 11:52:14.571752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.478 [2024-07-15 11:52:14.581791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.478 [2024-07-15 11:52:14.582147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.478 [2024-07-15 11:52:14.582165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.478 [2024-07-15 11:52:14.582176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.478 [2024-07-15 11:52:14.582191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.478 [2024-07-15 11:52:14.582213] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.478 [2024-07-15 11:52:14.582223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.478 [2024-07-15 11:52:14.582234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.478 [2024-07-15 11:52:14.582247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
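The eval traces above (common/autotest_common.sh@912-918) all come from the waitforcondition helper that drives every check in this suite. A minimal reconstruction from those xtrace entries, as a sketch rather than the exact upstream source:

  # Sketch of waitforcondition, inferred from the @912-918 trace entries:
  # the condition string is eval'd up to 10 times, one second apart.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # The condition can call helpers like get_subsystem_names and
          # compare their output against the expected value.
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1
  }
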
00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.739 [2024-07-15 11:52:14.591856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.592081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.592095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.592105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.592118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.592131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.592139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.592149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.592160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-07-15 11:52:14.601908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.602263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.602278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.602289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.602301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.602329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.602339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.602349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.602364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
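The get_subsystem_names and get_bdev_list fragments traced at host/discovery.sh@59 and @55 throughout this run reduce to two small RPC wrappers. Reassembled from the piped commands in the trace (a sketch, assuming rpc_cmd wraps scripts/rpc.py against the given socket):

  # get_subsystem_names (@59): controller names known to the host bdev layer.
  get_subsystem_names() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
          | jq -r '.[].name' | sort | xargs
  }

  # get_bdev_list (@55): attached namespace bdevs, e.g. "nvme0n1 nvme0n2".
  get_bdev_list() {
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
          | jq -r '.[].name' | sort | xargs
  }
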
00:25:46.739 [2024-07-15 11:52:14.611964] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.612281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.612296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.612305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.612318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.612330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.612338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.612348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.612359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.739 [2024-07-15 11:52:14.622028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.739 [2024-07-15 11:52:14.622342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.622359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.622368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.622381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.622394] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 
11:52:14.622403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.622412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.622430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.739 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:46.739 [2024-07-15 11:52:14.632081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.632312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.632327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.632337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.632351] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.632363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.632372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.632381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.632393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-07-15 11:52:14.642144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.642355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.642368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.642378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.642390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.642402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.642410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.642420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.642431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
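Likewise, the host/discovery.sh@63 fragments (bdev_nvme_get_controllers -n, jq on trid.trsvcid, sort -n, xargs) assemble into the path helper that the port checks rely on; a sketch from the trace:

  # get_subsystem_paths (@63): listener ports ("paths") the named controller
  # is currently connected through, e.g. "4420 4421" before 4420 is removed.
  get_subsystem_paths() {
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
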
00:25:46.739 [2024-07-15 11:52:14.652195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.652528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.652542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.739 [2024-07-15 11:52:14.652552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.739 [2024-07-15 11:52:14.652564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.739 [2024-07-15 11:52:14.652576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.739 [2024-07-15 11:52:14.652585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.739 [2024-07-15 11:52:14.652594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.739 [2024-07-15 11:52:14.652606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.739 [2024-07-15 11:52:14.662249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.739 [2024-07-15 11:52:14.662580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.739 [2024-07-15 11:52:14.662594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-07-15 11:52:14.662603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.740 [2024-07-15 11:52:14.662619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.740 [2024-07-15 11:52:14.662631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.740 [2024-07-15 11:52:14.662639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.740 [2024-07-15 11:52:14.662648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.740 [2024-07-15 11:52:14.662659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
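The notification checks (host/discovery.sh@74-75 and @79-80) count new target events past a cursor: notify_get_notifications -i $notify_id returns everything after that index, and the trace shows notify_id advancing 0 -> 1 -> 2 -> 4 as counts arrive. A sketch consistent with that bookkeeping:

  # get_notification_count (@74-75): count events newer than $notify_id,
  # then advance the cursor past them.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

  # is_notification_count_eq (@79-80): poll until the new-event count matches.
  is_notification_count_eq() {
      local expected_count=$1
      waitforcondition 'get_notification_count && ((notification_count == expected_count))'
  }
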
00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.740 [2024-07-15 11:52:14.672300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.740 [2024-07-15 11:52:14.672507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-07-15 11:52:14.672521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-07-15 11:52:14.672530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.740 [2024-07-15 11:52:14.672542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.740 [2024-07-15 11:52:14.672554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.740 [2024-07-15 11:52:14.672562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:46.740 [2024-07-15 11:52:14.672571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.740 [2024-07-15 11:52:14.672582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:46.740 [2024-07-15 11:52:14.682350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:25:46.740 [2024-07-15 11:52:14.682726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:46.740 [2024-07-15 11:52:14.682741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x223dfb0 with addr=10.0.0.2, port=4420 00:25:46.740 [2024-07-15 11:52:14.682751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x223dfb0 is same with the state(5) to be set 00:25:46.740 [2024-07-15 11:52:14.682763] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223dfb0 (9): Bad file descriptor 00:25:46.740 [2024-07-15 11:52:14.682782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:46.740 [2024-07-15 11:52:14.682791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 
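For context on the error storm above: the connect() failures with errno = 111 (ECONNREFUSED) begin once the test removes the 4420 listener at host/discovery.sh@127, so the host's reconnect path keeps dialing a port nobody is listening on until the next discovery log page reports that path gone. The triggering RPC, as it appears earlier in the trace:

  # Removing the first listener; subsequent reconnects to 10.0.0.2:4420
  # fail with ECONNREFUSED until discovery marks the 4420 path "not found".
  rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420
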
00:25:46.740 [2024-07-15 11:52:14.682800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:46.740 [2024-07-15 11:52:14.682811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:46.740 [2024-07-15 11:52:14.686794] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:46.740 [2024-07-15 11:52:14.686812] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:46.740 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:25:47.000 11:52:14 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.000 11:52:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:47.937 [2024-07-15 11:52:16.014418] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:47.937 [2024-07-15 11:52:16.014444] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:47.937 [2024-07-15 11:52:16.014459] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:48.196 [2024-07-15 11:52:16.100699] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:48.196 [2024-07-15 11:52:16.201460] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:48.196 [2024-07-15 11:52:16.201492] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x
00:25:48.196 request:
00:25:48.196 {
00:25:48.196 "name": "nvme",
00:25:48.196 "trtype": "tcp",
00:25:48.196 "traddr": "10.0.0.2",
00:25:48.196 "adrfam": "ipv4",
00:25:48.196 "trsvcid": "8009",
00:25:48.196 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:48.196 "wait_for_attach": true,
00:25:48.196 "method": "bdev_nvme_start_discovery",
00:25:48.196 "req_id": 1
00:25:48.196 }
00:25:48.196 Got JSON-RPC error response
00:25:48.196 response:
00:25:48.196 {
00:25:48.196 "code": -17,
00:25:48.196 "message": "File exists"
00:25:48.196 }
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:48.196 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:48.197 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:48.456 request:
00:25:48.456 {
00:25:48.456 "name": "nvme_second",
00:25:48.456 "trtype": "tcp",
00:25:48.456 "traddr": "10.0.0.2",
00:25:48.456 "adrfam": "ipv4",
00:25:48.456 "trsvcid": "8009",
00:25:48.456 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:48.456 "wait_for_attach": true,
00:25:48.456 "method": "bdev_nvme_start_discovery",
00:25:48.456 "req_id": 1
00:25:48.456 }
00:25:48.456 Got JSON-RPC error response
00:25:48.456 response:
00:25:48.456 {
00:25:48.456 "code": -17,
00:25:48.456 "message": "File exists"
00:25:48.456 }
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:48.456 11:52:16 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:49.394 [2024-07-15 11:52:17.465593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.394 [2024-07-15 11:52:17.465628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227aa40 with addr=10.0.0.2, port=8010
00:25:49.394 [2024-07-15 11:52:17.465663] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:49.394 [2024-07-15 11:52:17.465672] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:49.394 [2024-07-15 11:52:17.465681] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:50.772 [2024-07-15 11:52:18.468037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:50.772 [2024-07-15 11:52:18.468065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x227aa40 with addr=10.0.0.2, port=8010
00:25:50.772 [2024-07-15 11:52:18.468084] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:25:50.772 [2024-07-15 11:52:18.468093] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:25:50.772 [2024-07-15 11:52:18.468101] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:25:51.750 [2024-07-15 11:52:19.470081] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:25:51.750 request:
00:25:51.750 {
00:25:51.750 "name": "nvme_second",
00:25:51.750 "trtype": "tcp",
00:25:51.750 "traddr": "10.0.0.2",
00:25:51.750 "adrfam": "ipv4",
00:25:51.750 "trsvcid": "8010",
00:25:51.750 "hostnqn": "nqn.2021-12.io.spdk:test",
00:25:51.750 "wait_for_attach": false,
00:25:51.750 "attach_timeout_ms": 3000,
00:25:51.750 "method": "bdev_nvme_start_discovery",
00:25:51.750 "req_id": 1
00:25:51.750 }
00:25:51.750 Got JSON-RPC error response
00:25:51.750 response:
00:25:51.750 {
00:25:51.750 "code": -110,
00:25:51.750 "message": "Connection timed out"
00:25:51.750 }
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]]
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2080019
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:51.750 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2079782 ']'
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2079782
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2079782 ']'
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2079782
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2079782
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2079782' 00:25:51.750 killing process with pid 2079782 00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2079782 00:25:51.750 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2079782 00:25:52.010 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:52.011 11:52:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.919 11:52:21 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:53.919 00:25:53.919 real 0m19.416s 00:25:53.919 user 0m22.283s 00:25:53.919 sys 0m7.383s 00:25:53.919 11:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:53.919 11:52:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.919 ************************************ 00:25:53.919 END TEST nvmf_host_discovery 00:25:53.919 ************************************ 00:25:53.919 11:52:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:53.919 11:52:21 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:53.919 11:52:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:53.919 11:52:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:53.919 11:52:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.919 ************************************ 00:25:53.919 START TEST nvmf_host_multipath_status 00:25:53.919 ************************************ 00:25:53.919 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:54.178 * Looking for test storage... 
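Two autotest_common.sh helpers carry the discovery test that ends above: NOT, which runs a command and passes only when that command fails, and waitforcondition, which re-evaluates a condition string up to max=10 times (the local cond / local max=10 / (( max-- )) / eval steps are all visible in the trace). The sketch below is a hedged reconstruction, not the shipped implementation; the retry interval is assumed, and the real NOT additionally validates its argument via valid_exec_arg and treats exit codes above 128 specially:

    # Simplified stand-ins for the harness helpers in test/common/autotest_common.sh.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1    # interval assumed; not visible in the trace
        done
        return 1
    }

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # succeed only when the wrapped command failed
    }

Usage as traced: waitforcondition 'get_notification_count && ((notification_count == expected_count))' and NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery ...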
00:25:54.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:54.178 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:54.179 11:52:22 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:54.179 11:52:22 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:00.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:00.744 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
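The scan above fills the per-family arrays (e810, x722, mlx) from a prebuilt pci_bus_cache and then matches this host's two Intel 0x8086:0x159b ports. The snippet below only illustrates that ID matching with plain lspci; it is not the common.sh implementation, which also checks driver binding and, on TCP runs, skips the RDMA-only branches:

    # Enumerate Intel E810-family NICs by PCI ID (0x1592 and 0x159b, per the trace above).
    for dev in 1592 159b; do
        lspci -Dnmm -d "8086:${dev}" | while read -r addr _; do
            echo "Found ${addr} (0x8086 - 0x${dev})"
            ls "/sys/bus/pci/devices/${addr}/net/" 2>/dev/null    # net devices under this port
        done
    done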
00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:00.744 Found net devices under 0000:af:00.0: cvl_0_0 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:00.744 Found net devices under 0000:af:00.1: cvl_0_1 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:00.744 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:00.745 11:52:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:00.745 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:01.004 11:52:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:01.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:01.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms
00:26:01.004
00:26:01.004 --- 10.0.0.2 ping statistics ---
00:26:01.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.004 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:01.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:01.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:26:01.004
00:26:01.004 --- 10.0.0.1 ping statistics ---
00:26:01.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.004 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2085330
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2085330
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2085330 ']'
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:01.004 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:01.263 [2024-07-15 11:52:29.126038] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
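With the physical link split across namespaces and verified by the two pings above, nvmfappstart launches nvmf_tgt inside cvl_0_0_ns_spdk, so every TCP listener the target opens binds on the namespaced side (10.0.0.2) while the initiator side stays in the root namespace (10.0.0.1). A minimal sketch of that launch-and-wait pattern; the polling loop is only a stand-in for the harness's waitforlisten, approximated here by calling the rpc_get_methods RPC until the app answers:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the target answers on /var/tmp/spdk.sock.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is up"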
00:26:01.263 [2024-07-15 11:52:29.126084] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.263 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.263 [2024-07-15 11:52:29.201265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:01.263 [2024-07-15 11:52:29.270637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.263 [2024-07-15 11:52:29.270681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.263 [2024-07-15 11:52:29.270690] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.263 [2024-07-15 11:52:29.270699] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.263 [2024-07-15 11:52:29.270706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.263 [2024-07-15 11:52:29.270760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.263 [2024-07-15 11:52:29.270762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:01.832 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:01.832 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:01.832 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:01.832 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:01.832 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:02.090 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.090 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2085330 00:26:02.090 11:52:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:02.090 [2024-07-15 11:52:30.114936] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.090 11:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:02.348 Malloc0 00:26:02.348 11:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:02.606 11:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:02.606 11:52:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:02.864 [2024-07-15 11:52:30.832757] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:02.864 11:52:30 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:03.121 [2024-07-15 11:52:30.997214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2085737 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2085737 /var/tmp/bdevperf.sock 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2085737 ']' 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:03.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:03.121 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:04.056 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:04.056 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:04.056 11:52:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:04.056 11:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:04.623 Nvme0n1 00:26:04.623 11:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:04.882 Nvme0n1 00:26:04.883 11:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:04.883 11:52:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:06.786 11:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:06.786 11:52:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:07.044 11:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.324 11:52:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:08.261 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:08.261 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.261 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.261 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.520 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.779 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.779 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.779 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.779 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:09.038 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.038 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:09.038 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.038 11:52:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.038 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.038 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.038 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.038 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.296 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.296 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:09.296 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.555 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.813 11:52:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:10.749 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:10.749 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:10.749 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.749 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:11.008 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.008 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:11.008 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.008 11:52:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.008 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.008 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.008 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.008 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.266 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.266 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.266 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.266 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.524 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.783 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.783 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:11.783 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.075 11:52:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:12.075 11:52:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.450 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.708 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.708 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.708 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.708 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.967 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.967 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.967 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.967 11:52:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:13.967 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.967 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:13.967 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.967 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.225 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.225 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:14.225 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.485 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- 
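Each set_ANA_state step in the trace (multipath_status.sh@59/@60) is simply two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port. A sketch with the NQN, address, and ports copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # set_ANA_state <state-for-port-4420> <state-for-port-4421>
    set_ANA_state() {
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }
    set_ANA_state non_optimized inaccessible   # the @104 step in this trace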
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:14.485 11:52:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:15.913 11:52:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.172 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.172 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.172 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.172 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.432 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.691 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.691 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:16.691 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:16.950 11:52:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:16.950 11:52:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:18.328 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:18.328 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:18.328 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.328 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.329 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:18.587 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.587 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
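Each block of this trace is one iteration of the same loop: set_ANA_state <4420-state> <4421-state>, then sleep 1 so the initiator has time to observe the change (ANA transitions reach the host asynchronously, via an asynchronous event notification followed by a read of the ANA log page), then check_status with the six expected flags verified one port_status call at a time.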
port_status 4421 connected true 00:26:18.587 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.587 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.846 11:52:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.105 11:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.105 11:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:19.105 11:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:19.364 11:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.364 11:52:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
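The six booleans passed to check_status are the expected current, connected, and accessible values for ports 4420 and 4421, in that order (matching the @68..@73 port_status calls that follow each invocation). Collected from the cycles above, the mapping under the multipath policy in effect before the @116 switch is:

    ANA 4420 / ANA 4421            current        connected     accessible
    non_optimized / optimized      false / true   true / true   true / true
    non_optimized / non_optimized  true / false   true / true   true / true
    non_optimized / inaccessible   true / false   true / true   true / false
    inaccessible / inaccessible    false / false  true / true   false / false
    inaccessible / optimized       false / true   true / true   false / true

In short: a path stays connected as long as its TCP connection is up, accessible tracks whether its ANA group permits I/O, and at most one eligible path (preferring optimized over non_optimized) is current.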
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.743 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.002 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.002 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.002 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.002 11:52:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.261 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.521 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.521 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:21.780 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:21.780 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:26:22.040 11:52:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:22.040 11:52:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.417 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.677 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.677 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.677 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.677 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.936 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.936 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.936 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.936 11:52:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.936 11:52:52 
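At @116 the test flips the controller's multipath policy over the same bdevperf RPC socket (with $rpc as in the sketches above):

    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

The @119..@121 cycle that follows (optimized/optimized) immediately shows the effect: check_status true true true true true true, i.e. both paths are now current at once.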
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.936 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.936 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.936 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.195 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.195 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:24.195 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.454 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.713 11:52:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:25.654 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:25.654 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:25.654 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.654 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.913 11:52:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.913 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.913 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.171 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.171 11:52:54 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.171 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.171 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.430 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.430 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.430 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.430 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:26.689 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.948 11:52:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:27.207 11:52:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:28.145 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:28.145 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:28.145 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.145 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.405 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.405 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:28.405 11:52:56 
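Under active_active, current no longer marks a single failover path: every path in the best reachable ANA group reports current == true. The trace bears this out: with both listeners optimized (@121) or both non_optimized (@131) the expected currents are true/true, while a non_optimized path alongside an optimized one (@125) is still passed over, false/true.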
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.405 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.405 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.405 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.664 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.924 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.924 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.924 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.924 11:52:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:29.184 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:29.443 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:29.703 11:52:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:30.641 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:30.641 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.641 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.641 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.900 11:52:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.158 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.159 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.159 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.159 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.417 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.417 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.417 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.417 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2085737 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2085737 ']' 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2085737 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:31.675 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2085737 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2085737' 00:26:31.936 killing process with pid 2085737 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2085737 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2085737 00:26:31.936 Connection closed with partial response: 00:26:31.936 00:26:31.936 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2085737 00:26:31.936 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:31.936 [2024-07-15 11:52:31.047596] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:26:31.936 [2024-07-15 11:52:31.047652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2085737 ] 00:26:31.936 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.936 [2024-07-15 11:52:31.114867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.936 [2024-07-15 11:52:31.185531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.936 Running I/O for 90 seconds... 
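The teardown at @137..@139 is the stock autotest pattern: killprocess verifies the pid is still alive and is not a sudo wrapper, kills it, and the script then waits on it so bdevperf's exit status is collected; the "Connection closed with partial response" lines interleaved here are the dying process's last output. A condensed approximation of the helper, inferred from the common/autotest_common.sh trace above (not the verbatim source):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0            # nothing to kill if it already exited
        ps --no-headers -o comm= "$pid"       # traced above as: reactor_2
        echo "killing process with pid $pid"
        kill "$pid"
    }
    killprocess 2085737 && wait 2085737       # @137, then @139 in the trace

The cat of test/nvmf/host/try.txt at @141 then dumps everything bdevperf printed during the run, which is where the nvme_qpair output below comes from.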
00:26:31.936 [2024-07-15 11:52:44.839560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-07-15 11:52:44.839601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:124488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.839792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:124520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.839802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.840983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
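Everything from here on is one pattern repeated per in-flight command: nvme_io_qpair_print_command logs the submitted READ/WRITE (submission queue id, command id, namespace, LBA, length, SGL descriptor) and spdk_nvme_print_completion logs the error it completed with. Decoding one pair:

    WRITE sqid:1 cid:106 nsid:1 lba:124472 len:8      <- queue 1, command id 106,
                                                         8 blocks at LBA 124472
    ASYMMETRIC ACCESS INACCESSIBLE (03/02)            <- status code type 3h (path
    qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0         related) / status code 02h
                                                         (ANA inaccessible); dnr:0,
                                                         the do-not-retry bit is clear

These are exactly the failures the test provokes: the listener's ANA group was set inaccessible while I/O was outstanding, the target fails each command with ANA Inaccessible, and because dnr is 0 the initiator is free to retry on the other path.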
INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-07-15 11:52:44.841228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.936 [2024-07-15 11:52:44.841254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:31.936 [2024-07-15 11:52:44.841300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:124600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.936 [2024-07-15 11:52:44.841310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:124744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:124752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:31.937 [2024-07-15 11:52:44.841908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:31.937 [2024-07-15 11:52:44.841917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs condensed: WRITE sqid:1 nsid:1 lba:124768 through lba:125144 (len:8 each, 2024-07-15 11:52:44), then READ lba:85888-85976 and WRITE lba:85992-86216 (2024-07-15 11:52:57); every completion reported ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:26:31.939 Received shutdown signal, test time was about 26.835062 seconds
00:26:31.939
00:26:31.939 Latency(us)
00:26:31.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:31.939 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:31.939 Verification LBA range: start 0x0 length 0x4000
00:26:31.939 Nvme0n1 : 26.83 11194.58 43.73 0.00 0.00 11414.46 403.05 3019898.88
00:26:31.939 ===================================================================================================================
00:26:31.939 Total : 11194.58 43.73 0.00 0.00 11414.46 403.05 3019898.88
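[editor's note, not part of the captured log] The latency columns in this summary are microseconds, and the MiB/s column follows directly from the IOPS column at the job's 4096-byte I/O size: 11194.58 IOPS * 4096 B per I/O is about 43.73 MiB/s. A quick way to check the conversion from the shell (assumes bc is installed):

  $ echo 'scale=2; 11194.58 * 4096 / (1024 * 1024)' | bc   # IOPS * bytes per I/O, expressed in MiB/s
  43.72

which matches the reported 43.73 up to rounding.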
00:26:31.939 11:52:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.198 rmmod nvme_tcp 00:26:32.198 rmmod nvme_fabrics 00:26:32.198 rmmod nvme_keyring 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2085330 ']' 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2085330 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2085330 ']' 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2085330 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2085330 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2085330' 00:26:32.198 killing process with pid 2085330 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2085330 00:26:32.198 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2085330 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- #
nvmf_tcp_fini 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:32.457 11:53:00 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.991 11:53:02 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.991 00:26:34.991 real 0m40.558s 00:26:34.991 user 1m42.520s 00:26:34.991 sys 0m14.755s 00:26:34.991 11:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.991 11:53:02 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:34.991 ************************************ 00:26:34.991 END TEST nvmf_host_multipath_status 00:26:34.991 ************************************ 00:26:34.991 11:53:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:34.991 11:53:02 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:34.991 11:53:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:34.991 11:53:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.991 11:53:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:34.991 ************************************ 00:26:34.991 START TEST nvmf_discovery_remove_ifc 00:26:34.991 ************************************ 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:34.991 * Looking for test storage... 
00:26:34.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=[... paths/export.sh@3-@6 condensed: the PATH string shown at paths/export.sh@2 above is re-prepended with the same /opt/protoc/21.7/bin, /opt/go/1.21.1/bin and /opt/golangci/1.54.2/bin entries, exported at @5, and echoed at @6; the duplicated full PATH values are omitted here ...] 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- #
host_sock=/tmp/host.sock 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:34.991 11:53:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:41.611 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.611 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:41.612 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.612 11:53:09 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:41.612 Found net devices under 0000:af:00.0: cvl_0_0 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:41.612 Found net devices under 0000:af:00.1: cvl_0_1 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:41.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:26:41.612 00:26:41.612 --- 10.0.0.2 ping statistics --- 00:26:41.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.612 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:26:41.612 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:26:41.872 00:26:41.872 --- 10.0.0.1 ping statistics --- 00:26:41.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.872 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2094352 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2094352 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2094352 ']' 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:41.872 11:53:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.872 [2024-07-15 11:53:09.819795] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:26:41.872 [2024-07-15 11:53:09.819846] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.872 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.872 [2024-07-15 11:53:09.892385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.872 [2024-07-15 11:53:09.960894] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.872 [2024-07-15 11:53:09.960939] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.872 [2024-07-15 11:53:09.960948] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.872 [2024-07-15 11:53:09.960956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.872 [2024-07-15 11:53:09.960979] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.872 [2024-07-15 11:53:09.961001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.810 [2024-07-15 11:53:10.667454] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.810 [2024-07-15 11:53:10.675623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:42.810 null0 00:26:42.810 [2024-07-15 11:53:10.707597] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2094626 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2094626 /tmp/host.sock 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2094626 ']' 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
00:26:42.810 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:42.810 11:53:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:42.810 [2024-07-15 11:53:10.774980] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:26:42.810 [2024-07-15 11:53:10.775025] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2094626 ] 00:26:42.810 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.810 [2024-07-15 11:53:10.842524] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:42.810 [2024-07-15 11:53:10.913087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.745 11:53:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.681 [2024-07-15 11:53:12.696566] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:44.681 [2024-07-15 11:53:12.696588] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:44.681 [2024-07-15 11:53:12.696601] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:44.941 [2024-07-15 11:53:12.826047] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:44.941 [2024-07-15 11:53:12.887182] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:44.941 [2024-07-15 11:53:12.887223] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:44.941 [2024-07-15 11:53:12.887244] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:44.941 [2024-07-15 11:53:12.887258] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:44.941 [2024-07-15 11:53:12.887278] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:44.941 [2024-07-15 11:53:12.894991] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cf8d40 was disconnected and freed. delete nvme_qpair. 
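[editor's note, not part of the captured log] The get_bdev_list/wait_for_bdev helpers traced here simply poll the host app's RPC socket until bdev_get_bdevs returns the expected name list. A minimal sketch of that loop, assuming the SPDK tree checked out at the path this job uses and the /tmp/host.sock socket the test passes to the host app:

  #!/usr/bin/env bash
  # Poll once a second until the discovered namespace shows up as bdev "nvme0n1"
  # (sketch of the get_bdev_list / wait_for_bdev pair in
  # test/nvmf/host/discovery_remove_ifc.sh, reconstructed from the trace above).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  while [[ "$("$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" != "nvme0n1" ]]; do
    sleep 1
  done

Waiting on the empty string instead of nvme0n1, as the wait_for_bdev '' call below does, watches for the opposite transition: the bdev list draining once the controller is lost.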
00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:44.941 11:53:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:45.200 11:53:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:46.136 11:53:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:47.511 11:53:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:48.446 11:53:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.382 11:53:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.315 [2024-07-15 11:53:18.328272] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:50.315 [2024-07-15 11:53:18.328310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.315 [2024-07-15 11:53:18.328323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.315 [2024-07-15 11:53:18.328333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.315 [2024-07-15 11:53:18.328342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:50.315 [2024-07-15 11:53:18.328352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.315 [2024-07-15 11:53:18.328361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.315 [2024-07-15 11:53:18.328370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.315 [2024-07-15 11:53:18.328379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.315 [2024-07-15 11:53:18.328389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:50.315 [2024-07-15 11:53:18.328398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:50.315 [2024-07-15 11:53:18.328407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbf720 is same with the state(5) to be set 00:26:50.315 [2024-07-15 11:53:18.338293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbf720 (9): Bad file descriptor 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.315 11:53:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.315 [2024-07-15 11:53:18.348331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.686 [2024-07-15 11:53:19.392849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:51.686 [2024-07-15 11:53:19.392898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbf720 with addr=10.0.0.2, port=4420 00:26:51.686 [2024-07-15 11:53:19.392917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbf720 is same with the state(5) to be set 00:26:51.686 [2024-07-15 11:53:19.392945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbf720 (9): Bad file descriptor 00:26:51.686 [2024-07-15 11:53:19.393328] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:51.686 [2024-07-15 11:53:19.393350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:51.686 [2024-07-15 11:53:19.393363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:51.686 [2024-07-15 11:53:19.393377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
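The rpc_cmd/jq/sort/xargs lines repeated throughout this passage are the test's bdev polling helper: discovery_remove_ifc.sh@29 lists bdev names over the host RPC socket, and @33/@34 spin once per second until the list matches the expected value. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the script (rpc_cmd is the autotest wrapper around rpc.py; a production version would also want a timeout):

    get_bdev_list() {
        # list bdev names on the host app as one stable, space-joined line
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # poll until the bdev list equals the expected value
        # ('' while waiting for nvme0n1 to disappear, nvme1n1 after re-attach)
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }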
00:26:51.686 [2024-07-15 11:53:19.393397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:51.686 [2024-07-15 11:53:19.393410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:51.686 11:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:51.686 11:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:51.686 11:53:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.620 [2024-07-15 11:53:20.395879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:52.620 [2024-07-15 11:53:20.395911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:52.620 [2024-07-15 11:53:20.395921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:52.620 [2024-07-15 11:53:20.395932] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:52.620 [2024-07-15 11:53:20.395948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:52.620 [2024-07-15 11:53:20.395970] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:52.620 [2024-07-15 11:53:20.395996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.620 [2024-07-15 11:53:20.396008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-07-15 11:53:20.396020] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.620 [2024-07-15 11:53:20.396030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-07-15 11:53:20.396040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.620 [2024-07-15 11:53:20.396049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-07-15 11:53:20.396059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.620 [2024-07-15 11:53:20.396068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-07-15 11:53:20.396077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:52.620 [2024-07-15 11:53:20.396087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:52.620 [2024-07-15 11:53:20.396096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
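The errno 110 (ETIMEDOUT) connect failures above are expected: the fault this test injects is nothing more than an address/link flap inside the target's network namespace. The @75/@76 commands earlier took the port away, and the @82/@83 commands just below restore it; the commands are verbatim from the trace:

    # take the target port down (traced at discovery_remove_ifc.sh@75/@76)
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    # ... host-side reconnect attempts fail with errno 110 until ...
    # restore it (traced at @82/@83)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up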
00:26:52.620 [2024-07-15 11:53:20.396161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbeba0 (9): Bad file descriptor 00:26:52.620 [2024-07-15 11:53:20.397177] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:52.620 [2024-07-15 11:53:20.397190] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:52.620 11:53:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.556 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:53.814 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:53.814 11:53:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.380 [2024-07-15 11:53:22.451436] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:54.380 [2024-07-15 11:53:22.451456] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:54.380 [2024-07-15 11:53:22.451469] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:54.638 [2024-07-15 11:53:22.579867] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:54.638 [2024-07-15 11:53:22.642237] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:54.638 [2024-07-15 11:53:22.642271] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:54.638 [2024-07-15 11:53:22.642289] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:54.638 [2024-07-15 11:53:22.642303] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:54.638 [2024-07-15 11:53:22.642312] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:54.638 [2024-07-15 11:53:22.650313] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1cae620 was disconnected and freed. delete nvme_qpair. 
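The Discovery[10.0.0.2:8009] poller that re-creates the subsystem as nvme1 here is SPDK's host-side discovery service. The RPC that seeds it is not visible in this portion of the log, but it would have been something along the lines of the following hypothetical invocation (flag values assumed, not taken from the trace):

    # hypothetical: start the discovery service on the host app so that
    # subsystems advertised at 10.0.0.2:8009 are attached automatically
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -w

Once the port comes back up, the poller re-reads the discovery log page, sees nqn.2016-06.io.spdk:cnode0 again, and attaches it with a fresh controller name, which is why wait_for_bdev is now watching for nvme1n1 instead of nvme0n1.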
00:26:54.638 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.638 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.638 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.638 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.639 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.639 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:54.639 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.639 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2094626 ']' 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2094626' 00:26:54.898 killing process with pid 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2094626 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:54.898 11:53:22 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:54.898 rmmod nvme_tcp 00:26:54.898 rmmod nvme_fabrics 00:26:55.158 rmmod nvme_keyring 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
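killprocess, traced above for pid 2094626 and below for 2094352, guards against signalling the sudo wrapper before killing the target and then reaps it. Reconstructed from the trace (the @-numbered comments map to the autotest_common.sh lines visible in the xtrace), it behaves roughly like:

    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1            # @948: no pid given
        kill -0 "$pid" || return 1           # @952: process must exist
        local process_name
        if [[ $(uname) == Linux ]]; then     # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954
        fi
        [[ $process_name == sudo ]] && return 1   # @958: never kill sudo itself
        echo "killing process with pid $pid"      # @966
        kill "$pid"                               # @967
        wait "$pid"                               # @972: reap the child
    }

Here the comm is reactor_0 for the host app and reactor_1 for the target, so both fall through the sudo check and get a plain SIGTERM.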
00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2094352 ']' 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2094352 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2094352 ']' 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2094352 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2094352 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2094352' 00:26:55.158 killing process with pid 2094352 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2094352 00:26:55.158 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2094352 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.416 11:53:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.321 11:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:57.321 00:26:57.321 real 0m22.695s 00:26:57.321 user 0m26.382s 00:26:57.321 sys 0m7.349s 00:26:57.321 11:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:57.321 11:53:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.321 ************************************ 00:26:57.321 END TEST nvmf_discovery_remove_ifc 00:26:57.321 ************************************ 00:26:57.321 11:53:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:57.321 11:53:25 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:57.321 11:53:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:57.321 11:53:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:57.321 11:53:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:57.580 ************************************ 00:26:57.580 START TEST nvmf_identify_kernel_target 00:26:57.580 ************************************ 
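run_test, which frames both tests here, prints the START/END banners, times the script, and propagates its exit status; the real/user/sys summary above is its timing output. A hedged sketch of the wrapper (the actual autotest_common.sh version also toggles xtrace around the banners):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }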
00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:57.580 * Looking for test storage... 00:26:57.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:57.580 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:57.581 11:53:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:57.581 11:53:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:04.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:04.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:04.219 Found net devices under 0000:af:00.0: cvl_0_0 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.219 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:04.219 Found net devices under 0000:af:00.1: cvl_0_1 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.220 11:53:31 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:27:04.220 00:27:04.220 --- 10.0.0.2 ping statistics --- 00:27:04.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.220 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:04.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.257 ms 00:27:04.220 00:27:04.220 --- 10.0.0.1 ping statistics --- 00:27:04.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.220 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:04.220 11:53:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:04.220 11:53:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:07.510 Waiting for block devices as requested 00:27:07.510 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:07.510 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:07.769 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:07.769 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:07.769 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:08.028 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:08.028 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:08.028 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:08.028 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:08.287 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:08.287 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:08.287 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:08.546 No valid GPT data, bailing 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:08.546 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:08.808 00:27:08.808 Discovery Log Number of Records 2, Generation counter 2 00:27:08.808 =====Discovery Log Entry 0====== 00:27:08.808 trtype: tcp 00:27:08.808 adrfam: ipv4 00:27:08.808 subtype: current discovery subsystem 00:27:08.808 treq: not specified, sq flow control disable supported 00:27:08.808 portid: 1 00:27:08.808 trsvcid: 4420 00:27:08.808 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:08.808 traddr: 10.0.0.1 00:27:08.808 eflags: none 00:27:08.808 sectype: none 00:27:08.808 =====Discovery Log Entry 1====== 00:27:08.808 trtype: tcp 00:27:08.808 adrfam: ipv4 00:27:08.808 subtype: nvme subsystem 00:27:08.808 treq: not specified, sq flow control disable supported 00:27:08.808 portid: 1 00:27:08.808 trsvcid: 4420 00:27:08.808 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:08.808 traddr: 10.0.0.1 00:27:08.808 eflags: none 00:27:08.808 sectype: none 00:27:08.808 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:08.808 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:08.808 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.808 ===================================================== 00:27:08.808 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:08.808 ===================================================== 00:27:08.808 Controller Capabilities/Features 00:27:08.808 ================================ 00:27:08.808 Vendor ID: 0000 00:27:08.808 Subsystem Vendor ID: 0000 00:27:08.808 Serial Number: 890ef8581ba1afae6cbb 00:27:08.808 Model Number: Linux 00:27:08.808 Firmware Version: 6.7.0-68 00:27:08.808 Recommended Arb Burst: 0 00:27:08.808 IEEE OUI Identifier: 00 00 00 00:27:08.808 Multi-path I/O 00:27:08.808 May have multiple subsystem ports: No 00:27:08.808 May have multiple 
controllers: No 00:27:08.808 Associated with SR-IOV VF: No 00:27:08.808 Max Data Transfer Size: Unlimited 00:27:08.808 Max Number of Namespaces: 0 00:27:08.808 Max Number of I/O Queues: 1024 00:27:08.808 NVMe Specification Version (VS): 1.3 00:27:08.808 NVMe Specification Version (Identify): 1.3 00:27:08.808 Maximum Queue Entries: 1024 00:27:08.808 Contiguous Queues Required: No 00:27:08.808 Arbitration Mechanisms Supported 00:27:08.808 Weighted Round Robin: Not Supported 00:27:08.808 Vendor Specific: Not Supported 00:27:08.808 Reset Timeout: 7500 ms 00:27:08.808 Doorbell Stride: 4 bytes 00:27:08.808 NVM Subsystem Reset: Not Supported 00:27:08.808 Command Sets Supported 00:27:08.808 NVM Command Set: Supported 00:27:08.808 Boot Partition: Not Supported 00:27:08.808 Memory Page Size Minimum: 4096 bytes 00:27:08.808 Memory Page Size Maximum: 4096 bytes 00:27:08.808 Persistent Memory Region: Not Supported 00:27:08.808 Optional Asynchronous Events Supported 00:27:08.808 Namespace Attribute Notices: Not Supported 00:27:08.808 Firmware Activation Notices: Not Supported 00:27:08.808 ANA Change Notices: Not Supported 00:27:08.808 PLE Aggregate Log Change Notices: Not Supported 00:27:08.808 LBA Status Info Alert Notices: Not Supported 00:27:08.808 EGE Aggregate Log Change Notices: Not Supported 00:27:08.808 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.808 Zone Descriptor Change Notices: Not Supported 00:27:08.808 Discovery Log Change Notices: Supported 00:27:08.808 Controller Attributes 00:27:08.808 128-bit Host Identifier: Not Supported 00:27:08.808 Non-Operational Permissive Mode: Not Supported 00:27:08.808 NVM Sets: Not Supported 00:27:08.808 Read Recovery Levels: Not Supported 00:27:08.808 Endurance Groups: Not Supported 00:27:08.808 Predictable Latency Mode: Not Supported 00:27:08.808 Traffic Based Keep ALive: Not Supported 00:27:08.808 Namespace Granularity: Not Supported 00:27:08.808 SQ Associations: Not Supported 00:27:08.808 UUID List: Not Supported 00:27:08.808 Multi-Domain Subsystem: Not Supported 00:27:08.808 Fixed Capacity Management: Not Supported 00:27:08.808 Variable Capacity Management: Not Supported 00:27:08.808 Delete Endurance Group: Not Supported 00:27:08.808 Delete NVM Set: Not Supported 00:27:08.808 Extended LBA Formats Supported: Not Supported 00:27:08.808 Flexible Data Placement Supported: Not Supported 00:27:08.808 00:27:08.808 Controller Memory Buffer Support 00:27:08.808 ================================ 00:27:08.808 Supported: No 00:27:08.808 00:27:08.808 Persistent Memory Region Support 00:27:08.808 ================================ 00:27:08.808 Supported: No 00:27:08.808 00:27:08.808 Admin Command Set Attributes 00:27:08.808 ============================ 00:27:08.808 Security Send/Receive: Not Supported 00:27:08.808 Format NVM: Not Supported 00:27:08.808 Firmware Activate/Download: Not Supported 00:27:08.808 Namespace Management: Not Supported 00:27:08.808 Device Self-Test: Not Supported 00:27:08.808 Directives: Not Supported 00:27:08.808 NVMe-MI: Not Supported 00:27:08.808 Virtualization Management: Not Supported 00:27:08.808 Doorbell Buffer Config: Not Supported 00:27:08.808 Get LBA Status Capability: Not Supported 00:27:08.808 Command & Feature Lockdown Capability: Not Supported 00:27:08.808 Abort Command Limit: 1 00:27:08.808 Async Event Request Limit: 1 00:27:08.808 Number of Firmware Slots: N/A 00:27:08.808 Firmware Slot 1 Read-Only: N/A 00:27:08.808 Firmware Activation Without Reset: N/A 00:27:08.808 Multiple Update Detection Support: N/A 
00:27:08.808 Firmware Update Granularity: No Information Provided 00:27:08.808 Per-Namespace SMART Log: No 00:27:08.808 Asymmetric Namespace Access Log Page: Not Supported 00:27:08.808 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:08.808 Command Effects Log Page: Not Supported 00:27:08.808 Get Log Page Extended Data: Supported 00:27:08.808 Telemetry Log Pages: Not Supported 00:27:08.808 Persistent Event Log Pages: Not Supported 00:27:08.808 Supported Log Pages Log Page: May Support 00:27:08.808 Commands Supported & Effects Log Page: Not Supported 00:27:08.808 Feature Identifiers & Effects Log Page:May Support 00:27:08.808 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.808 Data Area 4 for Telemetry Log: Not Supported 00:27:08.808 Error Log Page Entries Supported: 1 00:27:08.808 Keep Alive: Not Supported 00:27:08.808 00:27:08.808 NVM Command Set Attributes 00:27:08.808 ========================== 00:27:08.808 Submission Queue Entry Size 00:27:08.808 Max: 1 00:27:08.808 Min: 1 00:27:08.808 Completion Queue Entry Size 00:27:08.808 Max: 1 00:27:08.808 Min: 1 00:27:08.808 Number of Namespaces: 0 00:27:08.808 Compare Command: Not Supported 00:27:08.808 Write Uncorrectable Command: Not Supported 00:27:08.808 Dataset Management Command: Not Supported 00:27:08.808 Write Zeroes Command: Not Supported 00:27:08.808 Set Features Save Field: Not Supported 00:27:08.808 Reservations: Not Supported 00:27:08.808 Timestamp: Not Supported 00:27:08.808 Copy: Not Supported 00:27:08.808 Volatile Write Cache: Not Present 00:27:08.808 Atomic Write Unit (Normal): 1 00:27:08.808 Atomic Write Unit (PFail): 1 00:27:08.808 Atomic Compare & Write Unit: 1 00:27:08.808 Fused Compare & Write: Not Supported 00:27:08.808 Scatter-Gather List 00:27:08.808 SGL Command Set: Supported 00:27:08.808 SGL Keyed: Not Supported 00:27:08.808 SGL Bit Bucket Descriptor: Not Supported 00:27:08.808 SGL Metadata Pointer: Not Supported 00:27:08.808 Oversized SGL: Not Supported 00:27:08.808 SGL Metadata Address: Not Supported 00:27:08.808 SGL Offset: Supported 00:27:08.808 Transport SGL Data Block: Not Supported 00:27:08.808 Replay Protected Memory Block: Not Supported 00:27:08.808 00:27:08.808 Firmware Slot Information 00:27:08.808 ========================= 00:27:08.808 Active slot: 0 00:27:08.808 00:27:08.808 00:27:08.808 Error Log 00:27:08.808 ========= 00:27:08.808 00:27:08.808 Active Namespaces 00:27:08.808 ================= 00:27:08.808 Discovery Log Page 00:27:08.808 ================== 00:27:08.808 Generation Counter: 2 00:27:08.808 Number of Records: 2 00:27:08.808 Record Format: 0 00:27:08.808 00:27:08.808 Discovery Log Entry 0 00:27:08.808 ---------------------- 00:27:08.808 Transport Type: 3 (TCP) 00:27:08.808 Address Family: 1 (IPv4) 00:27:08.808 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:08.809 Entry Flags: 00:27:08.809 Duplicate Returned Information: 0 00:27:08.809 Explicit Persistent Connection Support for Discovery: 0 00:27:08.809 Transport Requirements: 00:27:08.809 Secure Channel: Not Specified 00:27:08.809 Port ID: 1 (0x0001) 00:27:08.809 Controller ID: 65535 (0xffff) 00:27:08.809 Admin Max SQ Size: 32 00:27:08.809 Transport Service Identifier: 4420 00:27:08.809 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:08.809 Transport Address: 10.0.0.1 00:27:08.809 Discovery Log Entry 1 00:27:08.809 ---------------------- 00:27:08.809 Transport Type: 3 (TCP) 00:27:08.809 Address Family: 1 (IPv4) 00:27:08.809 Subsystem Type: 2 (NVM Subsystem) 00:27:08.809 Entry Flags: 
00:27:08.809 Duplicate Returned Information: 0 00:27:08.809 Explicit Persistent Connection Support for Discovery: 0 00:27:08.809 Transport Requirements: 00:27:08.809 Secure Channel: Not Specified 00:27:08.809 Port ID: 1 (0x0001) 00:27:08.809 Controller ID: 65535 (0xffff) 00:27:08.809 Admin Max SQ Size: 32 00:27:08.809 Transport Service Identifier: 4420 00:27:08.809 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:08.809 Transport Address: 10.0.0.1 00:27:08.809 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:08.809 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.809 get_feature(0x01) failed 00:27:08.809 get_feature(0x02) failed 00:27:08.809 get_feature(0x04) failed 00:27:08.809 ===================================================== 00:27:08.809 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.809 ===================================================== 00:27:08.809 Controller Capabilities/Features 00:27:08.809 ================================ 00:27:08.809 Vendor ID: 0000 00:27:08.809 Subsystem Vendor ID: 0000 00:27:08.809 Serial Number: 3eaa9c5731d52c93d475 00:27:08.809 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:08.809 Firmware Version: 6.7.0-68 00:27:08.809 Recommended Arb Burst: 6 00:27:08.809 IEEE OUI Identifier: 00 00 00 00:27:08.809 Multi-path I/O 00:27:08.809 May have multiple subsystem ports: Yes 00:27:08.809 May have multiple controllers: Yes 00:27:08.809 Associated with SR-IOV VF: No 00:27:08.809 Max Data Transfer Size: Unlimited 00:27:08.809 Max Number of Namespaces: 1024 00:27:08.809 Max Number of I/O Queues: 128 00:27:08.809 NVMe Specification Version (VS): 1.3 00:27:08.809 NVMe Specification Version (Identify): 1.3 00:27:08.809 Maximum Queue Entries: 1024 00:27:08.809 Contiguous Queues Required: No 00:27:08.809 Arbitration Mechanisms Supported 00:27:08.809 Weighted Round Robin: Not Supported 00:27:08.809 Vendor Specific: Not Supported 00:27:08.809 Reset Timeout: 7500 ms 00:27:08.809 Doorbell Stride: 4 bytes 00:27:08.809 NVM Subsystem Reset: Not Supported 00:27:08.809 Command Sets Supported 00:27:08.809 NVM Command Set: Supported 00:27:08.809 Boot Partition: Not Supported 00:27:08.809 Memory Page Size Minimum: 4096 bytes 00:27:08.809 Memory Page Size Maximum: 4096 bytes 00:27:08.809 Persistent Memory Region: Not Supported 00:27:08.809 Optional Asynchronous Events Supported 00:27:08.809 Namespace Attribute Notices: Supported 00:27:08.809 Firmware Activation Notices: Not Supported 00:27:08.809 ANA Change Notices: Supported 00:27:08.809 PLE Aggregate Log Change Notices: Not Supported 00:27:08.809 LBA Status Info Alert Notices: Not Supported 00:27:08.809 EGE Aggregate Log Change Notices: Not Supported 00:27:08.809 Normal NVM Subsystem Shutdown event: Not Supported 00:27:08.809 Zone Descriptor Change Notices: Not Supported 00:27:08.809 Discovery Log Change Notices: Not Supported 00:27:08.809 Controller Attributes 00:27:08.809 128-bit Host Identifier: Supported 00:27:08.809 Non-Operational Permissive Mode: Not Supported 00:27:08.809 NVM Sets: Not Supported 00:27:08.809 Read Recovery Levels: Not Supported 00:27:08.809 Endurance Groups: Not Supported 00:27:08.809 Predictable Latency Mode: Not Supported 00:27:08.809 Traffic Based Keep ALive: Supported 00:27:08.809 Namespace Granularity: Not Supported 
00:27:08.809 SQ Associations: Not Supported 00:27:08.809 UUID List: Not Supported 00:27:08.809 Multi-Domain Subsystem: Not Supported 00:27:08.809 Fixed Capacity Management: Not Supported 00:27:08.809 Variable Capacity Management: Not Supported 00:27:08.809 Delete Endurance Group: Not Supported 00:27:08.809 Delete NVM Set: Not Supported 00:27:08.809 Extended LBA Formats Supported: Not Supported 00:27:08.809 Flexible Data Placement Supported: Not Supported 00:27:08.809 00:27:08.809 Controller Memory Buffer Support 00:27:08.809 ================================ 00:27:08.809 Supported: No 00:27:08.809 00:27:08.809 Persistent Memory Region Support 00:27:08.809 ================================ 00:27:08.809 Supported: No 00:27:08.809 00:27:08.809 Admin Command Set Attributes 00:27:08.809 ============================ 00:27:08.809 Security Send/Receive: Not Supported 00:27:08.809 Format NVM: Not Supported 00:27:08.809 Firmware Activate/Download: Not Supported 00:27:08.809 Namespace Management: Not Supported 00:27:08.809 Device Self-Test: Not Supported 00:27:08.809 Directives: Not Supported 00:27:08.809 NVMe-MI: Not Supported 00:27:08.809 Virtualization Management: Not Supported 00:27:08.809 Doorbell Buffer Config: Not Supported 00:27:08.809 Get LBA Status Capability: Not Supported 00:27:08.809 Command & Feature Lockdown Capability: Not Supported 00:27:08.809 Abort Command Limit: 4 00:27:08.809 Async Event Request Limit: 4 00:27:08.809 Number of Firmware Slots: N/A 00:27:08.809 Firmware Slot 1 Read-Only: N/A 00:27:08.809 Firmware Activation Without Reset: N/A 00:27:08.809 Multiple Update Detection Support: N/A 00:27:08.809 Firmware Update Granularity: No Information Provided 00:27:08.809 Per-Namespace SMART Log: Yes 00:27:08.809 Asymmetric Namespace Access Log Page: Supported 00:27:08.809 ANA Transition Time : 10 sec 00:27:08.809 00:27:08.809 Asymmetric Namespace Access Capabilities 00:27:08.809 ANA Optimized State : Supported 00:27:08.809 ANA Non-Optimized State : Supported 00:27:08.809 ANA Inaccessible State : Supported 00:27:08.809 ANA Persistent Loss State : Supported 00:27:08.809 ANA Change State : Supported 00:27:08.809 ANAGRPID is not changed : No 00:27:08.809 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:08.809 00:27:08.809 ANA Group Identifier Maximum : 128 00:27:08.809 Number of ANA Group Identifiers : 128 00:27:08.809 Max Number of Allowed Namespaces : 1024 00:27:08.809 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:08.809 Command Effects Log Page: Supported 00:27:08.809 Get Log Page Extended Data: Supported 00:27:08.809 Telemetry Log Pages: Not Supported 00:27:08.809 Persistent Event Log Pages: Not Supported 00:27:08.809 Supported Log Pages Log Page: May Support 00:27:08.809 Commands Supported & Effects Log Page: Not Supported 00:27:08.809 Feature Identifiers & Effects Log Page:May Support 00:27:08.809 NVMe-MI Commands & Effects Log Page: May Support 00:27:08.809 Data Area 4 for Telemetry Log: Not Supported 00:27:08.809 Error Log Page Entries Supported: 128 00:27:08.809 Keep Alive: Supported 00:27:08.809 Keep Alive Granularity: 1000 ms 00:27:08.809 00:27:08.809 NVM Command Set Attributes 00:27:08.809 ========================== 00:27:08.809 Submission Queue Entry Size 00:27:08.809 Max: 64 00:27:08.809 Min: 64 00:27:08.809 Completion Queue Entry Size 00:27:08.809 Max: 16 00:27:08.809 Min: 16 00:27:08.809 Number of Namespaces: 1024 00:27:08.809 Compare Command: Not Supported 00:27:08.809 Write Uncorrectable Command: Not Supported 00:27:08.809 Dataset Management Command: Supported 
00:27:08.809 Write Zeroes Command: Supported 00:27:08.809 Set Features Save Field: Not Supported 00:27:08.809 Reservations: Not Supported 00:27:08.809 Timestamp: Not Supported 00:27:08.809 Copy: Not Supported 00:27:08.809 Volatile Write Cache: Present 00:27:08.809 Atomic Write Unit (Normal): 1 00:27:08.809 Atomic Write Unit (PFail): 1 00:27:08.809 Atomic Compare & Write Unit: 1 00:27:08.809 Fused Compare & Write: Not Supported 00:27:08.809 Scatter-Gather List 00:27:08.809 SGL Command Set: Supported 00:27:08.809 SGL Keyed: Not Supported 00:27:08.809 SGL Bit Bucket Descriptor: Not Supported 00:27:08.809 SGL Metadata Pointer: Not Supported 00:27:08.809 Oversized SGL: Not Supported 00:27:08.809 SGL Metadata Address: Not Supported 00:27:08.809 SGL Offset: Supported 00:27:08.809 Transport SGL Data Block: Not Supported 00:27:08.809 Replay Protected Memory Block: Not Supported 00:27:08.809 00:27:08.809 Firmware Slot Information 00:27:08.809 ========================= 00:27:08.809 Active slot: 0 00:27:08.809 00:27:08.809 Asymmetric Namespace Access 00:27:08.809 =========================== 00:27:08.809 Change Count : 0 00:27:08.809 Number of ANA Group Descriptors : 1 00:27:08.809 ANA Group Descriptor : 0 00:27:08.809 ANA Group ID : 1 00:27:08.809 Number of NSID Values : 1 00:27:08.809 Change Count : 0 00:27:08.809 ANA State : 1 00:27:08.809 Namespace Identifier : 1 00:27:08.809 00:27:08.809 Commands Supported and Effects 00:27:08.809 ============================== 00:27:08.809 Admin Commands 00:27:08.809 -------------- 00:27:08.810 Get Log Page (02h): Supported 00:27:08.810 Identify (06h): Supported 00:27:08.810 Abort (08h): Supported 00:27:08.810 Set Features (09h): Supported 00:27:08.810 Get Features (0Ah): Supported 00:27:08.810 Asynchronous Event Request (0Ch): Supported 00:27:08.810 Keep Alive (18h): Supported 00:27:08.810 I/O Commands 00:27:08.810 ------------ 00:27:08.810 Flush (00h): Supported 00:27:08.810 Write (01h): Supported LBA-Change 00:27:08.810 Read (02h): Supported 00:27:08.810 Write Zeroes (08h): Supported LBA-Change 00:27:08.810 Dataset Management (09h): Supported 00:27:08.810 00:27:08.810 Error Log 00:27:08.810 ========= 00:27:08.810 Entry: 0 00:27:08.810 Error Count: 0x3 00:27:08.810 Submission Queue Id: 0x0 00:27:08.810 Command Id: 0x5 00:27:08.810 Phase Bit: 0 00:27:08.810 Status Code: 0x2 00:27:08.810 Status Code Type: 0x0 00:27:08.810 Do Not Retry: 1 00:27:08.810 Error Location: 0x28 00:27:08.810 LBA: 0x0 00:27:08.810 Namespace: 0x0 00:27:08.810 Vendor Log Page: 0x0 00:27:08.810 ----------- 00:27:08.810 Entry: 1 00:27:08.810 Error Count: 0x2 00:27:08.810 Submission Queue Id: 0x0 00:27:08.810 Command Id: 0x5 00:27:08.810 Phase Bit: 0 00:27:08.810 Status Code: 0x2 00:27:08.810 Status Code Type: 0x0 00:27:08.810 Do Not Retry: 1 00:27:08.810 Error Location: 0x28 00:27:08.810 LBA: 0x0 00:27:08.810 Namespace: 0x0 00:27:08.810 Vendor Log Page: 0x0 00:27:08.810 ----------- 00:27:08.810 Entry: 2 00:27:08.810 Error Count: 0x1 00:27:08.810 Submission Queue Id: 0x0 00:27:08.810 Command Id: 0x4 00:27:08.810 Phase Bit: 0 00:27:08.810 Status Code: 0x2 00:27:08.810 Status Code Type: 0x0 00:27:08.810 Do Not Retry: 1 00:27:08.810 Error Location: 0x28 00:27:08.810 LBA: 0x0 00:27:08.810 Namespace: 0x0 00:27:08.810 Vendor Log Page: 0x0 00:27:08.810 00:27:08.810 Number of Queues 00:27:08.810 ================ 00:27:08.810 Number of I/O Submission Queues: 128 00:27:08.810 Number of I/O Completion Queues: 128 00:27:08.810 00:27:08.810 ZNS Specific Controller Data 00:27:08.810 
============================ 00:27:08.810 Zone Append Size Limit: 0 00:27:08.810 00:27:08.810 00:27:08.810 Active Namespaces 00:27:08.810 ================= 00:27:08.810 get_feature(0x05) failed 00:27:08.810 Namespace ID:1 00:27:08.810 Command Set Identifier: NVM (00h) 00:27:08.810 Deallocate: Supported 00:27:08.810 Deallocated/Unwritten Error: Not Supported 00:27:08.810 Deallocated Read Value: Unknown 00:27:08.810 Deallocate in Write Zeroes: Not Supported 00:27:08.810 Deallocated Guard Field: 0xFFFF 00:27:08.810 Flush: Supported 00:27:08.810 Reservation: Not Supported 00:27:08.810 Namespace Sharing Capabilities: Multiple Controllers 00:27:08.810 Size (in LBAs): 3125627568 (1490GiB) 00:27:08.810 Capacity (in LBAs): 3125627568 (1490GiB) 00:27:08.810 Utilization (in LBAs): 3125627568 (1490GiB) 00:27:08.810 UUID: ccb73369-ea00-4fa6-875b-5b07d7e58aa8 00:27:08.810 Thin Provisioning: Not Supported 00:27:08.810 Per-NS Atomic Units: Yes 00:27:08.810 Atomic Boundary Size (Normal): 0 00:27:08.810 Atomic Boundary Size (PFail): 0 00:27:08.810 Atomic Boundary Offset: 0 00:27:08.810 NGUID/EUI64 Never Reused: No 00:27:08.810 ANA group ID: 1 00:27:08.810 Namespace Write Protected: No 00:27:08.810 Number of LBA Formats: 1 00:27:08.810 Current LBA Format: LBA Format #00 00:27:08.810 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:08.810 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.810 rmmod nvme_tcp 00:27:08.810 rmmod nvme_fabrics 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:08.810 11:53:36 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.347 11:53:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
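For orientation, the nvmftestfini sequence traced above reduces to roughly these steps (a sketch, not the exact nvmf/common.sh source; _remove_spdk_ns runs with xtrace disabled, so its namespace cleanup is an assumption):

modprobe -v -r nvme-tcp nvme-fabrics           # unload initiator transport modules (the rmmod lines above)
ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed _remove_spdk_ns body: drop the target namespace
ip -4 addr flush cvl_0_1                       # clear the initiator-side interface, as traced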
00:27:11.347 11:53:38 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:11.347 11:53:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:11.347 11:53:38 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:11.347 11:53:39 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:14.632 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:14.632 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:14.632 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:14.632 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:14.632 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:14.632 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:27:14.633 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:27:16.032 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:27:16.292 00:27:16.292 real 0m18.800s 00:27:16.292 user 0m4.290s 00:27:16.292 sys 0m10.028s 00:27:16.292 11:53:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.292 11:53:44 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:16.292 ************************************ 00:27:16.292 END TEST nvmf_identify_kernel_target 00:27:16.292 ************************************ 00:27:16.292 11:53:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:16.292 11:53:44 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:16.292 11:53:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:16.292 11:53:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.292 11:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.292 ************************************ 00:27:16.292 START TEST nvmf_auth_host 00:27:16.292 ************************************ 00:27:16.292 11:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:16.551 * Looking for test storage... 00:27:16.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.551 11:53:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.552 11:53:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.124 
11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:23.124 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:23.124 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:23.124 Found net devices under 0000:af:00.0: 
cvl_0_0 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:23.124 Found net devices under 0000:af:00.1: cvl_0_1 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:23.124 11:53:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:23.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:23.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:27:23.124 00:27:23.124 --- 10.0.0.2 ping statistics --- 00:27:23.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.124 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:23.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:27:23.124 00:27:23.124 --- 10.0.0.1 ping statistics --- 00:27:23.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.124 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2107089 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:23.124 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2107089 00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2107089 ']' 00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
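The nvmf_tcp_init sequence traced above wires the two E810 ports into a point-to-point TCP topology: cvl_0_0 moves into a private network namespace for the target, while cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace (addresses and names exactly as in this run):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                 # initiator -> target reachability check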
00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:23.125 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.062 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.062 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:24.062 11:53:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:24.062 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:24.062 11:53:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4f5096f1239914f75ee05307cfdfec13 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Fqb 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4f5096f1239914f75ee05307cfdfec13 0 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4f5096f1239914f75ee05307cfdfec13 0 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4f5096f1239914f75ee05307cfdfec13 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Fqb 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Fqb 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Fqb 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:24.062 
11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8a6636936ca16a8aab0e268cc271ce06e01e167b077961dbca5047e39769cf6 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.chF 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8a6636936ca16a8aab0e268cc271ce06e01e167b077961dbca5047e39769cf6 3 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8a6636936ca16a8aab0e268cc271ce06e01e167b077961dbca5047e39769cf6 3 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8a6636936ca16a8aab0e268cc271ce06e01e167b077961dbca5047e39769cf6 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.chF 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.chF 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.chF 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b80fa42e05d09e586938ff4baefd2e1655e86d5c326e27b5 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pkB 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b80fa42e05d09e586938ff4baefd2e1655e86d5c326e27b5 0 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b80fa42e05d09e586938ff4baefd2e1655e86d5c326e27b5 0 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b80fa42e05d09e586938ff4baefd2e1655e86d5c326e27b5 00:27:24.062 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pkB 00:27:24.322 11:53:52 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pkB 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pkB 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b843a1fd7442b6aac71c90060047e0028ac3d93fb4931bcc 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.BI7 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b843a1fd7442b6aac71c90060047e0028ac3d93fb4931bcc 2 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b843a1fd7442b6aac71c90060047e0028ac3d93fb4931bcc 2 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b843a1fd7442b6aac71c90060047e0028ac3d93fb4931bcc 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.BI7 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.BI7 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.BI7 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7f11ff67f070cd3266e877496c3fb43a 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Vnr 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7f11ff67f070cd3266e877496c3fb43a 1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7f11ff67f070cd3266e877496c3fb43a 1 
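Each gen_dhchap_key call traced above draws len/2 random bytes with xxd -p and hands the resulting hex string, together with a digest id, to an inline python helper (elided from the trace) that produces the DHHC-1 secret string. A minimal reconstruction of that formatting step, assuming the standard NVMe DH-HMAC-CHAP representation of base64(secret || CRC32(secret)) with the CRC in little-endian byte order:

format_dhchap_key_sketch() {
  # args: <key (the ASCII hex string from xxd)> <digest id: 0=null 1=sha256 2=sha384 3=sha512>
  python3 - "$1" "$2" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the secret is the printable hex string itself
crc = zlib.crc32(key).to_bytes(4, "little")  # CRC32 of the secret, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
PY
}
# e.g. format_dhchap_key_sketch "$(xxd -p -c0 -l 16 /dev/urandom)" 0
# produces DHHC-1:00:<base64>: which the trace writes to a mktemp file and chmods to 0600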
00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7f11ff67f070cd3266e877496c3fb43a 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Vnr 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Vnr 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Vnr 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=466210ae5c41a309fd381f22d775233c 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FYF 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 466210ae5c41a309fd381f22d775233c 1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 466210ae5c41a309fd381f22d775233c 1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=466210ae5c41a309fd381f22d775233c 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FYF 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FYF 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FYF 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=11d3a223a9756340d440122e0aed6f4677cdcf6a107e4af3 00:27:24.322 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RHn 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 11d3a223a9756340d440122e0aed6f4677cdcf6a107e4af3 2 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 11d3a223a9756340d440122e0aed6f4677cdcf6a107e4af3 2 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=11d3a223a9756340d440122e0aed6f4677cdcf6a107e4af3 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RHn 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RHn 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RHn 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e31fdbfd2607033c7b1150c10259453 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xUx 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e31fdbfd2607033c7b1150c10259453 0 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e31fdbfd2607033c7b1150c10259453 0 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.581 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e31fdbfd2607033c7b1150c10259453 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xUx 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xUx 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xUx 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=04dd7b1577b4b09a2f3d1e7559979924a24612ae791badb82ee8b7d688dccd1f 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yF9 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 04dd7b1577b4b09a2f3d1e7559979924a24612ae791badb82ee8b7d688dccd1f 3 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 04dd7b1577b4b09a2f3d1e7559979924a24612ae791badb82ee8b7d688dccd1f 3 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=04dd7b1577b4b09a2f3d1e7559979924a24612ae791badb82ee8b7d688dccd1f 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yF9 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yF9 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.yF9 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2107089 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2107089 ']' 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
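nvmfappstart, traced a few entries back, launched the target inside the namespace and then blocked in waitforlisten until its RPC socket answered. In outline (the polling loop is an assumption; waitforlisten's body is not shown in the trace):

ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# assumed wait: poll the app's UNIX-domain RPC socket until it accepts requests
while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
  sleep 0.5
done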
00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:24.582 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Fqb 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.chF ]] 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.chF 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pkB 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.BI7 ]] 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BI7 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.841 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Vnr 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FYF ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FYF 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.RHn 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xUx ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xUx 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.yF9 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
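The keyring_file_add_key sequence above registers every generated key file with the running SPDK target over /var/tmp/spdk.sock; rpc_cmd is a thin wrapper around scripts/rpc.py. The whole registration loop condenses to the sketch below, assuming keys/ckeys hold the temp-file paths as in the trace:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

for i in "${!keys[@]}"; do
	"$rpc_py" keyring_file_add_key "key$i" "${keys[i]}"
	if [[ -n ${ckeys[i]} ]]; then
		# an empty ckey (keyid 4 above) means no controller key was generated
		"$rpc_py" keyring_file_add_key "ckey$i" "${ckeys[i]}"
	fi
done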
00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:24.842 11:53:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:28.128 Waiting for block devices as requested 00:27:28.128 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:28.128 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:28.387 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:28.387 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:28.387 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:28.387 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:28.646 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:28.646 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:28.646 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:27:28.905 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:27:28.905 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:27:28.905 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:27:29.163 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:27:29.163 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:27:29.163 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:27:29.421 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:27:29.421 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:27:30.356 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:30.357 No valid GPT data, bailing 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:27:30.357 00:27:30.357 Discovery Log Number of Records 2, Generation counter 2 00:27:30.357 =====Discovery Log Entry 0====== 00:27:30.357 trtype: tcp 00:27:30.357 adrfam: ipv4 00:27:30.357 subtype: current discovery subsystem 00:27:30.357 treq: not specified, sq flow control disable supported 00:27:30.357 portid: 1 00:27:30.357 trsvcid: 4420 00:27:30.357 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:30.357 traddr: 10.0.0.1 00:27:30.357 eflags: none 00:27:30.357 sectype: none 00:27:30.357 =====Discovery Log Entry 1====== 00:27:30.357 trtype: tcp 00:27:30.357 adrfam: ipv4 00:27:30.357 subtype: nvme subsystem 00:27:30.357 treq: not specified, sq flow control disable supported 00:27:30.357 portid: 1 00:27:30.357 trsvcid: 4420 00:27:30.357 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:30.357 traddr: 10.0.0.1 00:27:30.357 eflags: none 00:27:30.357 sectype: none 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 
]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.357 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.627 nvme0n1 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.627 
11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.627 
11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.627 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.900 nvme0n1 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.900 11:53:58 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:30.900 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.901 nvme0n1 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.901 11:53:58 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
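On the target side, configure_kernel_target and nvmet_auth_set_key above drive the in-kernel nvmet through configfs; the bare echo lines in the trace are redirections into configfs attributes, which xtrace does not display. A sketch of the inferred wiring, under the assumption that the targets are the standard kernel nvmet attribute names (the mapping is not visible in the log itself):

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet nvmet-tcp
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"

# Back the namespace with the scanned, non-zoned, GPT-free NVMe disk.
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener matching the nvme discover call above.
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# Restrict the subsystem to one host and program its DH-HMAC-CHAP parameters.
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo "$(< "${keys[0]}")" > "$host/dhchap_key"
echo "$(< "${ckeys[0]}")" > "$host/dhchap_ctrl_key"

With the port linked, the discover output above shows two records, the discovery subsystem plus nqn.2024-02.io.spdk:cnode0, and the dhchap_* values must agree with what the SPDK initiator offers, or the attach below fails authentication.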
00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.159 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.160 nvme0n1 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.160 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:31.418 11:53:59 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.418 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.419 nvme0n1 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.419 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.677 nvme0n1 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.677 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.935 nvme0n1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.935 11:53:59 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.193 nvme0n1 00:27:32.193 
11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.193 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.451 nvme0n1 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.451 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.710 nvme0n1 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.710 
11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.710 11:54:00 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.710 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.968 nvme0n1 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:32.968 11:54:00 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:32.968 11:54:00 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:32.968 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.968 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.968 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.226 nvme0n1 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.226 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.227 11:54:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.227 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.485 nvme0n1 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.485 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.743 11:54:01 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.743 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.002 nvme0n1 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
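The blocks repeating through this stretch of the log come from the nested loops visible in the trace (host/auth.sh@101 over "${dhgroups[@]}", @102 over "${!keys[@]}"): for each DH group, every keyid is pushed into the kernel nvmet host entry and a full connect/authenticate/detach cycle runs; this excerpt covers ffdhe3072 through ffdhe8192, all under sha256. A paraphrased sketch of that control flow, reconstructed from the xtrace; the configfs path and attribute names are assumptions, since the trace records only the echo arguments, not their redirection targets:

    # keys/ckeys hold the DHHC-1:xx:...: secrets seen in the trace, populated earlier in the script.
    declare -a keys ckeys
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0    # assumed target-side path

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            echo 'hmac(sha256)'   > "$host/dhchap_hash"       # assumed attribute name
            echo "$dhgroup"       > "$host/dhchap_dhgroup"    # assumed attribute name
            echo "${keys[keyid]}" > "$host/dhchap_key"
            # keyid 4 has no controller key in this run ([[ -z '' ]] in the trace),
            # so the bidirectional key is only written when present.
            [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
            connect_authenticate sha256 "$dhgroup" "$keyid"   # the attach/verify/detach traced above
        done
    done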
00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:34.002 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.003 11:54:01 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.261 nvme0n1 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.261 11:54:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.261 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 nvme0n1 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:34.519 11:54:02 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.519 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.084 nvme0n1 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.085 
11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.085 11:54:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.085 11:54:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.085 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.342 nvme0n1 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.342 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.600 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.858 nvme0n1 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.858 
11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.858 11:54:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.425 nvme0n1 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.425 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.684 nvme0n1 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.684 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.943 11:54:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.510 nvme0n1 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.510 11:54:05 
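Between iterations the script checks that exactly one controller came up and tears it down; the @64/@65 entries around each bare nvme0n1 marker (which appears to be the namespace block device surfacing after a successful attach) are that check. A compact sketch of the step, with names taken directly from the trace:

  # per-iteration verification, as traced at host/auth.sh@64-@65
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]   # xtrace quotes the pattern as \n\v\m\e\0
  rpc_cmd bdev_nvme_detach_controller nvme0
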
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.510 11:54:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:37.511 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.078 nvme0n1 00:27:38.078 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.078 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.078 11:54:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.078 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.078 11:54:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.078 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.646 nvme0n1 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.646 
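get_main_ns_ip (nvmf/common.sh@741-@755, traced in full above) picks which environment variable holds the address to dial, then dereferences it. Reconstructed from the trace; TEST_TRANSPORT is an assumed name for the variable that expands to tcp here:

  # hedged reconstruction of get_main_ns_ip (nvmf/common.sh@741-@755)
  get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
      [rdma]=NVMF_FIRST_TARGET_IP   # @744
      [tcp]=NVMF_INITIATOR_IP       # @745
    )
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # @747
    ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip=NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1            # @750: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                          # @755: echo 10.0.0.1
  }
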
11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.646 11:54:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.214 nvme0n1 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.214 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.473 
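All secrets in this run use the NVMe in-band authentication secret representation DHHC-1:<t>:<base64>:, where <t> selects the optional secret transformation hash (per the spec: 00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32. A quick way to peel one apart (GNU coreutils assumed for the negative head count):

  key='DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6:'
  b64=${key#DHHC-1:*:}   # strip the "DHHC-1:<t>:" prefix
  b64=${b64%:}           # strip the trailing colon
  echo "$b64" | base64 -d | head -c -4 | od -A x -t x1z  # secret minus CRC-32
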
11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.473 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 nvme0n1 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.042 11:54:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 nvme0n1 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.042 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
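The @100-@103 entries just above mark the transition from sha256 to sha384 and back down to the smallest FFDHE group: the test sweeps the full cross-product of digests, DH groups, and key IDs. The driving loop, sketched from those trace lines -- the exact contents of the digests and dhgroups arrays are not visible in this excerpt and are assumed:

  # sketch of the sweep at host/auth.sh@100-@104; array contents assumed
  for digest in "${digests[@]}"; do        # @100: sha256, sha384, ...
    for dhgroup in "${dhgroups[@]}"; do    # @101: ffdhe2048 .. ffdhe8192
      for keyid in "${!keys[@]}"; do       # @102: 0 1 2 3 4
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # @103
        connect_authenticate "$digest" "$dhgroup" "$keyid" # @104
      done
    done
  done
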
00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.301 nvme0n1 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.301 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- 
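The @58 entry shows how the optional controller (bidirectional) key flag is assembled: ${var:+word} expands to nothing when var is empty, so key ID 4, whose ckey is blank ([[ -z '' ]] in the trace), is attached without --dhchap-ctrlr-key. In isolation, with illustrative values:

  # ${ckeys[keyid]:+...} yields the flag pair only for non-empty entries
  ckeys=([1]=DHHC-1:02:placeholder [4]='')   # illustrative, not the real keys
  keyid=1; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 2 -- flag and argument present
  keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  echo "${#ckey[@]}"   # 0 -- flag omitted, unidirectional auth
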
host/auth.sh@44 -- # keyid=2 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.560 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 nvme0n1 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.561 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.820 nvme0n1 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.820 11:54:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.079 nvme0n1 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.079 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
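Every rpc_cmd in this log is bracketed by common/autotest_common.sh@559 xtrace_disable, @10 set +x, and a closing @587 [[ 0 == 0 ]]: the harness mutes tracing while the RPC helper runs and restores it afterwards, and the [[ 0 == 0 ]] is consistent with an exit-status check whose variables have already expanded to literal zeros. The helper bodies are not shown in this excerpt; a hypothetical reconstruction of the pattern:

  # hypothetical -- the real helpers live in common/autotest_common.sh
  xtrace_disable() {  # @559
    set +x            # @10: the last traced command before silence
  }
  xtrace_restore() {
    local rc=$?
    set -x
    [[ $rc == 0 ]]    # traces as "[[ 0 == 0 ]]" (@587) when rc is 0
  }
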
00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.080 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 nvme0n1 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
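connect_authenticate (host/auth.sh@55-@61) is the host side of each iteration: restrict the initiator to exactly one digest/DH-group pair, then attach with the matching key. Reconstructed from the trace; any success checks that follow the attach are outside this excerpt:

  # host side of one iteration, from the host/auth.sh@55-@61 trace
  connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})   # @58
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"       # @60
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"                       # @61
  }
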
00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.339 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.340 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.598 nvme0n1 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.598 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.857 nvme0n1 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.857 11:54:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.117 nvme0n1 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.117 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.376 nvme0n1 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.376 11:54:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.376 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.635 nvme0n1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.635 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.894 nvme0n1 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.894 11:54:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:42.894 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.895 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.155 11:54:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.155 11:54:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.155 nvme0n1 00:27:43.155 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:43.414 11:54:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.414 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.415 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 nvme0n1 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:43.674 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.934 nvme0n1 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.934 11:54:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.502 nvme0n1 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.502 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.503 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.763 nvme0n1 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.763 11:54:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:44.763 11:54:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.341 nvme0n1 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:45.341 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.342 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.602 nvme0n1 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
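The nvmet_auth_set_key traces above provision the target side of DH-HMAC-CHAP before each connect attempt: the echo 'hmac(sha384)', echo ffdhe6144/ffdhe8192, and echo DHHC-1:... records at host/auth.sh@48-51 write the digest, FFDHE group, and per-keyid secret for the host entry. A minimal sketch of what those writes amount to, assuming the standard kernel nvmet configfs layout; the configfs paths are an assumption, since the trace only shows the echo side:

    # hypothetical paths: the log does not print where these echos land
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"     # digest used for DH-HMAC-CHAP
    echo ffdhe6144      > "$host/dhchap_dhgroup"  # FFDHE group for the DH exchange
    echo "DHHC-1:03:MDRkZDdi...ZmD3+ak=:" > "$host/dhchap_key"   # key abbreviated here
    # when a ckey exists (keyids 0-3 above), host/auth.sh@51 echoes it as well,
    # enabling bidirectional authentication; keyid 4 has ckey='' and skips that step

The [[ -z ... ]] test at host/auth.sh@51 in the trace is exactly this conditional: the controller key is only written when the keyid has one configured.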
00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.602 11:54:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:45.860 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.860 11:54:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.133 nvme0n1 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
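On the initiator side, each connect_authenticate pass is driven by two SPDK RPCs, both visible verbatim in the trace: bdev_nvme_set_options at host/auth.sh@60 restricts the allowed DH-HMAC-CHAP digests and DH groups, and bdev_nvme_attach_controller at @61 connects with the key pair for the current keyid, after get_main_ns_ip (nvmf/common.sh@741-755) resolves the target address to 10.0.0.1 for tcp. A condensed sketch of one pass, taken directly from the traced commands; rpc_cmd is the suite's RPC wrapper, and key0/ckey0 name DHHC-1 secrets loaded earlier in the script:

    # one (digest, dhgroup, keyid) iteration as traced above
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 on success
    rpc_cmd bdev_nvme_detach_controller nvme0              # tear down for the next pass

The recurring [[ nvme0 == \n\v\m\e\0 ]] check at host/auth.sh@64 is the success assertion: authentication worked only if the attached controller shows up under the expected name.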
00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.133 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.771 nvme0n1 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.771 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.772 11:54:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 nvme0n1 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.340 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.341 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.909 nvme0n1 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.909 11:54:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:47.909 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.909 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.909 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:47.909 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:48.168 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.169 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.736 nvme0n1 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.736 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.737 11:54:16 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.737 11:54:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.304 nvme0n1 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.304 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.563 nvme0n1 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.563 11:54:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:49.563 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 nvme0n1 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.564 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:49.823 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.824 nvme0n1 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:49.824 11:54:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:49.824 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.083 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.084 11:54:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 nvme0n1 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.084 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.343 nvme0n1 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.343 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.344 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.602 nvme0n1 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.602 
11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.602 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.603 11:54:18 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.603 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.862 nvme0n1 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
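The nvmet_auth_set_key calls traced above provision the kernel nvmet target with the per-keyid DH-HMAC-CHAP material before each connect attempt. The xtrace output shows only the echoed values, not the redirection targets; a minimal sketch of what the helper appears to do, assuming the standard nvmet configfs attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) under the host's configfs entry, would be:

# Sketch only: $nvmet_host (the configfs directory for nqn.2024-02.io.spdk:host0)
# and the keys/ckeys arrays are assumed from earlier in the script; the trace
# shows the echoed values but not where they are written.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    echo "hmac($digest)" > "$nvmet_host/dhchap_hash"    # e.g. 'hmac(sha512)'
    echo "$dhgroup" > "$nvmet_host/dhchap_dhgroup"      # e.g. ffdhe3072
    echo "$key" > "$nvmet_host/dhchap_key"              # DHHC-1:... host key
    [[ -z $ckey ]] || echo "$ckey" > "$nvmet_host/dhchap_ctrl_key"  # only for bidirectional auth
}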
00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:50.862 11:54:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.121 nvme0n1 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.121 11:54:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
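The get_main_ns_ip body being traced at this point (nvmf/common.sh@741-755) simply maps the transport to the environment variable that holds the test IP and prints its value through bash indirection. Reconstructed from the trace, taking the surrounding variables (TEST_TRANSPORT=tcp, NVMF_INITIATOR_IP=10.0.0.1) as given:

get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # The '[[ -z tcp ]]' and '[[ -z NVMF_INITIATOR_IP ]]' records in the trace
    # are these guards after expansion.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # '[[ -z 10.0.0.1 ]]' in the trace
    echo "${!ip}"                 # prints 10.0.0.1 here
}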
00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.121 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.381 nvme0n1 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.381 
11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.381 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 nvme0n1 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.641 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.901 nvme0n1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.901 11:54:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.901 11:54:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.160 nvme0n1 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.160 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
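On the initiator side, each connect_authenticate pass reduces to two RPCs, both visible verbatim in the trace: first restrict the allowed digest and DH group, then attach with the matching key pair. Stripped of the xtrace noise (rpc_cmd is the autotest helper that forwards to the SPDK RPC socket; key2/ckey2 are key names registered earlier in the script, outside this excerpt):

rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2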
00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.418 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.677 nvme0n1 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.677 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.678 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 nvme0n1 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.938 11:54:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 nvme0n1 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
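Note the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion that precedes every set_options call. When ckeys[keyid] is empty, as in the keyid=4 passes above (ckey='', matched by the '[[ -z '' ]]' records), the array stays empty and the attach runs with --dhchap-key key4 alone, exercising unidirectional authentication; for the other keyids it injects the extra flag and the handshake is bidirectional. A minimal illustration of the expansion:

declare -a ckeys ckey
keyid=4; ckeys[4]=''                 # no controller key registered
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"                   # 0 -> attach gets no --dhchap-ctrlr-key

keyid=1; ckeys[1]='DHHC-1:02:...'    # placeholder for a non-empty controller key
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${ckey[@]}"                    # --dhchap-ctrlr-key ckey1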
00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.198 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 nvme0n1 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:53.766 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
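The source markers give the shape of the whole sha512 sweep: host/auth.sh@101 is the outer loop over DH groups (ffdhe3072, ffdhe4096, ffdhe6144 so far in this excerpt), @102 the inner loop over all five key IDs, and @103/@104 the target-side and initiator-side halves of each pass; the @64/@65 records are the verification and teardown inside connect_authenticate. Reconstructed, with dhgroups and keys assumed from earlier in the script:

for dhgroup in "${dhgroups[@]}"; do                 # host/auth.sh@101
    for keyid in "${!keys[@]}"; do                  # host/auth.sh@102
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # @103, target side
        connect_authenticate sha512 "$dhgroup" "$keyid"  # @104, initiator side
        # connect_authenticate verifies and detaches before the next pass:
        #   rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'  # -> nvme0 (@64)
        #   rpc_cmd bdev_nvme_detach_controller nvme0             # (@65)
    done
done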
00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.767 11:54:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.026 nvme0n1 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.026 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.285 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.285 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.285 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.544 nvme0n1 00:27:54.544 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.544 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.545 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.114 nvme0n1 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.114 11:54:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.114 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.372 nvme0n1 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.372 11:54:23 
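
The sha512/ffdhe6144 sweep above repeats one initiator-side recipe per key id: constrain the bdev_nvme layer to exactly the digest/DH-group pair under test, attach with the matching host and controller secrets, confirm the controller exists, and detach. A minimal sketch of one such iteration, assuming SPDK's stock rpc.py client under the workspace's scripts/ directory and that the DHHC-1 secrets were already registered under the names key1/ckey1 (the registration step is outside this excerpt):

    # Hypothetical path; the trace only shows rpc_cmd, not the client it wraps.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Allow only this combination, so a successful connect proves that this
    # exact digest/dhgroup pair negotiated end to end.
    $RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Bidirectional auth: key1 authenticates the host to the target,
    # ckey1 authenticates the controller back to the host.
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the controller came up, then detach before the next key id.
    [[ $($RPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $RPC bdev_nvme_detach_controller nvme0
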
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGY1MDk2ZjEyMzk5MTRmNzVlZTA1MzA3Y2ZkZmVjMTMf93W6: 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YThhNjYzNjkzNmNhMTZhOGFhYjBlMjY4Y2MyNzFjZTA2ZTAxZTE2N2IwNzc5NjFkYmNhNTA0N2UzOTc2OWNmNtcuy4o=: 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.372 11:54:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.938 nvme0n1 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.938 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.197 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.763 nvme0n1 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.763 11:54:24 
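
On the target side, each nvmet_auth_set_key call programs the kernel nvmet host entry to expect the same material; the echo 'hmac(sha512)' / echo ffdhe8192 / echo DHHC-1:... lines above are those writes with their redirections hidden by xtrace. A sketch of where they plausibly land, assuming the standard kernel nvmet configfs attribute names (the actual paths are not visible in this excerpt):

    # Mirror of nvmet_auth_set_key for digest sha512, dhgroup ffdhe8192, keyid 1.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest the target will use
    echo ffdhe8192      > "$host/dhchap_dhgroup"   # DH group for the exchange
    echo 'DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==:' > "$host/dhchap_key"
    # The controller key is only written when bidirectional auth is exercised:
    echo 'DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==:' > "$host/dhchap_ctrlr_key"
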
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.763 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:N2YxMWZmNjdmMDcwY2QzMjY2ZTg3NzQ5NmMzZmI0M2GDbwT5: 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: ]] 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NDY2MjEwYWU1YzQxYTMwOWZkMzgxZjIyZDc3NTIzM2OAUDmO: 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:56.764 11:54:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.330 nvme0n1 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MTFkM2EyMjNhOTc1NjM0MGQ0NDAxMjJlMGFlZDZmNDY3N2NkY2Y2YTEwN2U0YWYzWP1ohA==: 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:M2UzMWZkYmZkMjYwNzAzM2M3YjExNTBjMTAyNTk0NTPqKkFn: 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:57.330 11:54:25 
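
The get_main_ns_ip helper that precedes every attach above resolves the initiator address indirectly: it maps the transport to the name of the variable that holds the address, checks each stage for emptiness, and only then dereferences. A reconstruction consistent with the trace (the variable carrying "tcp" is not named in the xtrace output, so TEST_TRANSPORT below is an assumption):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # traces as [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1                             # indirect: [[ -z 10.0.0.1 ]]
        echo "${!ip}"                                           # 10.0.0.1 throughout this run
    }

The two-level lookup keeps the transport-specific plumbing in one place: supporting a new transport means adding one array entry, not another copy of the address logic.
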
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.330 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.896 nvme0n1 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDRkZDdiMTU3N2I0YjA5YTJmM2QxZTc1NTk5Nzk5MjRhMjQ2MTJhZTc5MWJhZGI4MmVlOGI3ZDY4OGRjY2QxZmD3+ak=: 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:27:57.896 11:54:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.461 nvme0n1 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.461 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YjgwZmE0MmUwNWQwOWU1ODY5MzhmZjRiYWVmZDJlMTY1NWU4NmQ1YzMyNmUyN2I1EkXA3g==: 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Yjg0M2ExZmQ3NDQyYjZhYWM3MWM5MDA2MDA0N2UwMDI4YWMzZDkzZmI0OTMxYmNj9EXVFw==: 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.719 
11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.719 request: 00:27:58.719 { 00:27:58.719 "name": "nvme0", 00:27:58.719 "trtype": "tcp", 00:27:58.719 "traddr": "10.0.0.1", 00:27:58.719 "adrfam": "ipv4", 00:27:58.719 "trsvcid": "4420", 00:27:58.719 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.719 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.719 "prchk_reftag": false, 00:27:58.719 "prchk_guard": false, 00:27:58.719 "hdgst": false, 00:27:58.719 "ddgst": false, 00:27:58.719 "method": "bdev_nvme_attach_controller", 00:27:58.719 "req_id": 1 00:27:58.719 } 00:27:58.719 Got JSON-RPC error response 00:27:58.719 response: 00:27:58.719 { 00:27:58.719 "code": -5, 00:27:58.719 "message": "Input/output error" 00:27:58.719 } 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.719 request: 00:27:58.719 { 00:27:58.719 "name": "nvme0", 00:27:58.719 "trtype": "tcp", 00:27:58.719 "traddr": "10.0.0.1", 00:27:58.719 "adrfam": "ipv4", 00:27:58.719 "trsvcid": "4420", 00:27:58.719 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.719 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.719 "prchk_reftag": false, 00:27:58.719 "prchk_guard": false, 00:27:58.719 "hdgst": false, 00:27:58.719 "ddgst": false, 00:27:58.719 "dhchap_key": "key2", 00:27:58.719 "method": "bdev_nvme_attach_controller", 00:27:58.719 "req_id": 1 00:27:58.719 } 00:27:58.719 Got JSON-RPC error response 00:27:58.719 response: 00:27:58.719 { 00:27:58.719 "code": -5, 00:27:58.719 "message": "Input/output error" 00:27:58.719 } 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.719 11:54:26 
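
The two request/response dumps above are the negative half of the matrix: after re-keying the target for sha256/ffdhe2048, attaching with no DH-CHAP material at all and then with key2 alone must both be refused, and the NOT wrapper turns that refusal into a pass. A compact equivalent of the assertion being made (rpc.py path hypothetical, as in the earlier sketch):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Expect failure: the target now demands sha256/ffdhe2048 with key1,
    # so offering only key2 must be rejected during CONNECT.
    if $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key key2; then
        echo "mismatched DH-CHAP connect unexpectedly succeeded" >&2
        exit 1
    fi

rpc.py surfaces the target's rejection as JSON-RPC code -5 ("Input/output error"), which is exactly the error object dumped in the trace.
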
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.719 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.720 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.978 request: 00:27:58.978 { 00:27:58.978 "name": "nvme0", 00:27:58.978 "trtype": "tcp", 00:27:58.978 "traddr": "10.0.0.1", 00:27:58.978 "adrfam": "ipv4", 
00:27:58.978 "trsvcid": "4420", 00:27:58.978 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:58.978 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:58.978 "prchk_reftag": false, 00:27:58.978 "prchk_guard": false, 00:27:58.978 "hdgst": false, 00:27:58.978 "ddgst": false, 00:27:58.978 "dhchap_key": "key1", 00:27:58.978 "dhchap_ctrlr_key": "ckey2", 00:27:58.978 "method": "bdev_nvme_attach_controller", 00:27:58.978 "req_id": 1 00:27:58.978 } 00:27:58.978 Got JSON-RPC error response 00:27:58.978 response: 00:27:58.978 { 00:27:58.978 "code": -5, 00:27:58.978 "message": "Input/output error" 00:27:58.978 } 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:58.978 rmmod nvme_tcp 00:27:58.978 rmmod nvme_fabrics 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2107089 ']' 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2107089 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2107089 ']' 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2107089 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2107089 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2107089' 00:27:58.978 killing process with pid 2107089 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2107089 00:27:58.978 11:54:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2107089 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.237 11:54:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:01.141 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:01.400 11:54:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:04.689 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:04.689 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:06.094 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:28:06.094 11:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Fqb /tmp/spdk.key-null.pkB /tmp/spdk.key-sha256.Vnr /tmp/spdk.key-sha384.RHn /tmp/spdk.key-sha512.yF9 
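
The cleanup above unwinds everything the test built, in strict reverse order: unload the initiator modules, flush the netns, dismantle the kernel target through configfs (where a directory can only be rmdir'ed once nothing links into it), and drop the generated secrets. A sketch of the configfs teardown as traced; paths follow the standard nvmet layout, and since the bare 'echo 0' above does not show its redirection target, the namespace enable switch below is an inference:

    cfg=/sys/kernel/config/nvmet
    subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0

    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"    # unlink the host ACL first
    rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
    echo 0 > "$subsys/namespaces/1/enable"                     # inferred target of 'echo 0'
    rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0" # detach subsystem from port
    rmdir "$subsys/namespaces/1"
    rmdir "$cfg/ports/1"
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                                # now the modules can go
    rm -f /tmp/spdk.key-*                                      # generated DHHC-1 secrets
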
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:06.094 11:54:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:09.386 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:09.386 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:09.386 00:28:09.386 real 0m52.716s 00:28:09.386 user 0m45.480s 00:28:09.386 sys 0m14.549s 00:28:09.386 11:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:09.386 11:54:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.386 ************************************ 00:28:09.386 END TEST nvmf_auth_host 00:28:09.386 ************************************ 00:28:09.386 11:54:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:09.386 11:54:37 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:28:09.386 11:54:37 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:09.386 11:54:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:09.386 11:54:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:09.386 11:54:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.386 ************************************ 00:28:09.386 START TEST nvmf_digest 00:28:09.386 ************************************ 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:09.386 * Looking for test storage... 
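
With nvmf_auth_host finished (the END TEST banner above), the run rolls straight into nvmf_digest. The common.sh preamble it sources mints one host identity per run and packages it so every kernel-initiator connect can splice it in verbatim. A sketch of that pattern; NVME_HOSTNQN, NVME_HOSTID, and NVME_HOST are straight from the trace, but the exact HOSTID derivation is not shown, so the suffix strip below is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the <uuid> tail
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

    # Later, e.g. when the digest tests connect to nqn.2016-06.io.spdk:cnode1:
    # nvme connect -t tcp -a "$NVMF_FIRST_TARGET_IP" -s 4420 \
    #     -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"

Keeping the pair in an array means callers never re-quote it; "${NVME_HOST[@]}" expands to exactly two well-formed flags.
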
00:28:09.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.386 11:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:09.387 11:54:37 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:09.387 11:54:37 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:15.954 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:15.955 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:15.955 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:15.955 Found net devices under 0000:af:00.0: cvl_0_0 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:15.955 Found net devices under 0000:af:00.1: cvl_0_1 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:28:15.955 00:28:15.955 --- 10.0.0.2 ping statistics --- 00:28:15.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.955 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:28:15.955 00:28:15.955 --- 10.0.0.1 ping statistics --- 00:28:15.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.955 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:15.955 ************************************ 00:28:15.955 START TEST nvmf_digest_clean 00:28:15.955 ************************************ 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2121220 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2121220 00:28:15.955 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2121220 ']' 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:15.956 11:54:43 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:15.956 [2024-07-15 11:54:43.606379] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:15.956 [2024-07-15 11:54:43.606425] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.956 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.956 [2024-07-15 11:54:43.680350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.956 [2024-07-15 11:54:43.751367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.956 [2024-07-15 11:54:43.751404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.956 [2024-07-15 11:54:43.751414] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.956 [2024-07-15 11:54:43.751422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.956 [2024-07-15 11:54:43.751429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
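Before any digest traffic flows, nvmftestinit has already built a back-to-back test bed from the two E810 ports found above: one port is pushed into a network namespace to act as the target, the other stays in the root namespace as the initiator, and a single ping in each direction proves the path. Condensed from the trace (interface names and addresses are whatever this run picked):

```bash
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

# the target app then runs inside the namespace, held idle until RPC setup:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --wait-for-rpc
```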
00:28:15.956 [2024-07-15 11:54:43.751456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.525 null0 00:28:16.525 [2024-07-15 11:54:44.519003] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.525 [2024-07-15 11:54:44.543196] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2121483 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2121483 /var/tmp/bperf.sock 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2121483 ']' 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:28:16.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.525 11:54:44 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:16.525 [2024-07-15 11:54:44.580381] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:16.525 [2024-07-15 11:54:44.580428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121483 ] 00:28:16.525 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.784 [2024-07-15 11:54:44.647942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.784 [2024-07-15 11:54:44.720300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.351 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.351 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:17.351 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.351 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.351 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.609 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.609 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.866 nvme0n1 00:28:17.866 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:17.866 11:54:45 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.125 Running I/O for 2 seconds... 
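Every run_bperf iteration in this section follows the same choreography, visible in the trace just above and repeated three more times below: start bdevperf suspended, finish its framework over RPC, attach the NVMe-oF controller with TCP data digest enabled, then trigger the timed run whose results follow. Condensed, with the first iteration's parameters (randread, 4 KiB, queue depth 128):

```bash
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this job
SOCK=/var/tmp/bperf.sock

# 1. bdevperf pinned to core 1 (-m 2), idle until told otherwise (-z --wait-for-rpc)
$SPDK/build/examples/bdevperf -m 2 -r $SOCK -w randread -o 4096 -t 2 -q 128 \
    -z --wait-for-rpc &

# 2. complete subsystem init, then attach the target with data digest (--ddgst) on
$SPDK/scripts/rpc.py -s $SOCK framework_start_init
$SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. kick off the 2-second workload against the resulting nvme0n1 bdev
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
```

With --ddgst set, every NVMe/TCP data PDU carries a CRC-32C over its payload, which is what routes work through the accel framework that the test inspects afterwards.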
00:28:20.024
00:28:20.024 Latency(us)
00:28:20.024 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.024 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:20.024 nvme0n1 : 2.00 27686.45 108.15 0.00 0.00 4618.16 2018.51 18769.51
00:28:20.024 ===================================================================================================================
00:28:20.024 Total : 27686.45 108.15 0.00 0.00 4618.16 2018.51 18769.51
00:28:20.024 0
00:28:20.024 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:20.024 | select(.opcode=="crc32c") 00:28:20.024 | "\(.module_name) \(.executed)"' 00:28:20.283 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2121483 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2121483 ']' 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2121483 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2121483 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2121483' killing process with pid 2121483 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2121483
Received shutdown signal, test time was about 2.000000 seconds
00:28:20.283
00:28:20.283 Latency(us)
00:28:20.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:20.283 ===================================================================================================================
00:28:20.283 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:20.283 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2121483 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:20.543 11:54:48
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2122083 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2122083 /var/tmp/bperf.sock 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2122083 ']' 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:20.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.543 11:54:48 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:20.543 [2024-07-15 11:54:48.503092] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:20.543 [2024-07-15 11:54:48.503144] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122083 ] 00:28:20.543 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.543 Zero copy mechanism will not be used. 
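After each timed run, the harness does not take bdevperf's I/O numbers on faith: it queries the accel framework over the same bperf socket and requires that the crc32c opcode was executed at least once, by the expected module ("software" in this job, since no DSA is configured). The check, condensed from the trace above:

```bash
# expected output shape: "software <count>" with <count> strictly greater than 0
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
```

The result tables are also internally consistent: 27686.45 IOPS x 4 KiB = 108.15 MiB/s for the first run, and the 128 KiB runs work out the same way (4352.13 IOPS x 128 KiB = 544.02 MiB/s). The "Zero copy mechanism will not be used" notices around the 128 KiB runs merely restate bdevperf's 65536-byte zero-copy threshold and have no bearing on the digest checks.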
00:28:20.543 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.543 [2024-07-15 11:54:48.574264] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.543 [2024-07-15 11:54:48.640122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:21.478 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.045 nvme0n1 00:28:22.045 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:22.045 11:54:49 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:22.045 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:22.045 Zero copy mechanism will not be used. 00:28:22.045 Running I/O for 2 seconds... 
00:28:23.949
00:28:23.949 Latency(us)
00:28:23.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:23.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:23.949 nvme0n1 : 2.00 4352.13 544.02 0.00 0.00 3673.82 917.50 10433.33
00:28:23.949 ===================================================================================================================
00:28:23.949 Total : 4352.13 544.02 0.00 0.00 3673.82 917.50 10433.33
00:28:23.949 0
00:28:23.949 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:23.949 | select(.opcode=="crc32c") 00:28:23.949 | "\(.module_name) \(.executed)"' 00:28:24.208 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2122083 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2122083 ']' 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2122083 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2122083 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2122083' killing process with pid 2122083 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2122083
Received shutdown signal, test time was about 2.000000 seconds
00:28:24.208
00:28:24.208 Latency(us)
00:28:24.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:24.208 ===================================================================================================================
00:28:24.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:24.208 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2122083 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:24.467 11:54:52
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2122837 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2122837 /var/tmp/bperf.sock 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2122837 ']' 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:24.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:24.467 11:54:52 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:24.467 [2024-07-15 11:54:52.469919] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:28:24.467 [2024-07-15 11:54:52.469974] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122837 ] 00:28:24.467 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.467 [2024-07-15 11:54:52.539227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.726 [2024-07-15 11:54:52.614197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.293 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:25.293 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:25.293 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:25.293 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:25.293 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:25.551 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.551 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:25.810 nvme0n1 00:28:25.810 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:25.810 11:54:53 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.068 Running I/O for 2 seconds... 
00:28:27.972
00:28:27.972 Latency(us)
00:28:27.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:27.972 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:27.972 nvme0n1 : 2.00 29666.87 115.89 0.00 0.00 4309.39 3263.69 10800.33
00:28:27.972 ===================================================================================================================
00:28:27.972 Total : 29666.87 115.89 0.00 0.00 4309.39 3263.69 10800.33
00:28:27.972 0
00:28:27.972 11:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 11:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 11:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 11:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 11:54:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:27.972 | select(.opcode=="crc32c") 00:28:27.972 | "\(.module_name) \(.executed)"' 00:28:28.231 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2122837 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2122837 ']' 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2122837 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2122837 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2122837' killing process with pid 2122837 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2122837
Received shutdown signal, test time was about 2.000000 seconds
00:28:28.231
00:28:28.231 Latency(us)
00:28:28.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:28.231 ===================================================================================================================
00:28:28.231 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:28.232 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2122837 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:28.491 11:54:56
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2123394 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2123394 /var/tmp/bperf.sock 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2123394 ']' 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:28.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:28.491 11:54:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:28.491 [2024-07-15 11:54:56.449861] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:28.491 [2024-07-15 11:54:56.449911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123394 ] 00:28:28.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.491 Zero copy mechanism will not be used. 
00:28:28.491 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.491 [2024-07-15 11:54:56.518831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.491 [2024-07-15 11:54:56.584138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.428 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.716 nvme0n1 00:28:29.716 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:29.716 11:54:57 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:29.716 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:29.716 Zero copy mechanism will not be used. 00:28:29.716 Running I/O for 2 seconds... 
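Each of these runs is torn down by the killprocess helper from autotest_common.sh, whose xtrace is interleaved with the shutdown tables throughout this section: probe the PID, confirm the command name is the expected SPDK reactor thread rather than a sudo wrapper, announce, kill, and let the caller reap. A rough reconstruction from the traced commands (the real helper carries more error handling than these runs exercise):

```bash
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                             # @948: no pid given
    kill -0 "$pid" || return 1                            # @952: still alive?
    if [ "$(uname)" = Linux ]; then                       # @953
        process_name=$(ps --no-headers -o comm= "$pid")   # @954: "reactor_1" here
    fi
    [ "$process_name" = sudo ] && return 1                # @958: never kill the wrapper
    echo "killing process with pid $pid"                  # @966
    kill "$pid"                                           # @967; caller waits at @972
}
```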
00:28:32.252
00:28:32.252 Latency(us)
00:28:32.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.252 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:32.252 nvme0n1 : 2.00 4586.16 573.27 0.00 0.00 3484.01 2202.01 20027.80
00:28:32.252 ===================================================================================================================
00:28:32.252 Total : 4586.16 573.27 0.00 0.00 3484.01 2202.01 20027.80
00:28:32.252 0
00:28:32.252 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:32.252 | select(.opcode=="crc32c") 00:28:32.252 | "\(.module_name) \(.executed)"' 00:28:32.252 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2123394 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2123394 ']' 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2123394 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 11:54:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2123394 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2123394' killing process with pid 2123394 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2123394
Received shutdown signal, test time was about 2.000000 seconds
00:28:32.252
00:28:32.252 Latency(us)
00:28:32.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:32.252 ===================================================================================================================
00:28:32.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2123394 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2121220 11:55:00
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2121220 ']' 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2121220 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2121220 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2121220' 00:28:32.252 killing process with pid 2121220 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2121220 00:28:32.252 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2121220 00:28:32.511 00:28:32.511 real 0m16.863s 00:28:32.511 user 0m31.718s 00:28:32.511 sys 0m4.906s 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:32.511 ************************************ 00:28:32.511 END TEST nvmf_digest_clean 00:28:32.511 ************************************ 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:32.511 ************************************ 00:28:32.511 START TEST nvmf_digest_error 00:28:32.511 ************************************ 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2124197 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2124197 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2124197 ']' 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:32.511 11:55:00 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:32.511 [2024-07-15 11:55:00.550778] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:32.512 [2024-07-15 11:55:00.550826] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.512 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.771 [2024-07-15 11:55:00.624118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.771 [2024-07-15 11:55:00.695356] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.771 [2024-07-15 11:55:00.695397] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.771 [2024-07-15 11:55:00.695406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.771 [2024-07-15 11:55:00.695414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.771 [2024-07-15 11:55:00.695421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
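For reference, the target launch traced above can be reproduced by hand roughly as follows. This is a condensed sketch, not the verbatim nvmfappstart/waitforlisten code: the nvmf_tgt invocation and socket path are the ones shown in this log, while the polling loop and retry count are assumptions standing in for the framework's wait logic.

# Sketch: start the target paused (--wait-for-rpc) so crc32c can be remapped
# to the 'error' accel module before any subsystem initializes.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ip netns exec cvl_0_0_ns_spdk \
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Assumed stand-in for waitforlisten: poll the RPC socket until it answers.
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done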
00:28:32.771 [2024-07-15 11:55:00.695441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.340 [2024-07-15 11:55:01.365422] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.340 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.600 null0 00:28:33.600 [2024-07-15 11:55:01.453203] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.600 [2024-07-15 11:55:01.477392] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2124387 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2124387 /var/tmp/bperf.sock 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2124387 ']' 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
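The target-side configuration that just ran (crc32c handed to the 'error' accel module, a null0 bdev exported over a TCP listener on 10.0.0.2:4420) amounts to roughly the RPC sequence below. This is a hedged reconstruction rather than digest.sh verbatim: accel_assign_opc, the bdev name null0, the subsystem NQN, and the listener address all appear in this log, while the null bdev sizing and the exact subsystem flags are assumptions.

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
# Route all crc32c work to the 'error' module, then finish the paused init.
$rpc accel_assign_opc -o crc32c -m error
$rpc framework_start_init
# null0 as in this log; 1000 MiB with 512 B blocks is an assumed sizing.
$rpc bdev_null_create null0 1000 512
$rpc nvmf_create_transport -t tcp
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The host side then attaches with data digest enabled (--ddgst) and arms
# crc32c corruption (accel_error_inject_error -t corrupt -i 256), so every
# READ below completes with a data digest error, as the trace shows.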
00:28:33.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.600 11:55:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:33.600 [2024-07-15 11:55:01.530558] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:33.600 [2024-07-15 11:55:01.530605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124387 ] 00:28:33.600 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.600 [2024-07-15 11:55:01.600209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.600 [2024-07-15 11:55:01.672888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.535 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.536 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.794 nvme0n1 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.794 11:55:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.794 Running I/O for 2 seconds... 00:28:34.794 [2024-07-15 11:55:02.862425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:34.794 [2024-07-15 11:55:02.862460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.794 [2024-07-15 11:55:02.862480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.794 [2024-07-15 11:55:02.872573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:34.794 [2024-07-15 11:55:02.872600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.794 [2024-07-15 11:55:02.872611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.794 [2024-07-15 11:55:02.881048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:34.794 [2024-07-15 11:55:02.881072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.794 [2024-07-15 11:55:02.881084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.794 [2024-07-15 11:55:02.890100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:34.794 [2024-07-15 11:55:02.890124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:15930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.794 [2024-07-15 11:55:02.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.794 [2024-07-15 11:55:02.899150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:34.794 [2024-07-15 11:55:02.899174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.794 [2024-07-15 11:55:02.899185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.052 [2024-07-15 11:55:02.907545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.907568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.907580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.916945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.916967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21070 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.916977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.925499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.925522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.925532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.934928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.934951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:6292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.934962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.942929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.942955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.942966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.952620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.952643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.952653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.961267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.961289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.961300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.969566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.969589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.969599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.978828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.978857] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.978867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.988096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.988118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.988128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:02.996824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:02.996852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:22268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:02.996863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.005797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.005819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.005829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.014479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.014502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.014512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.023002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.023025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.023035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.032305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.032328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.032340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.041537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 
11:55:03.041559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.041570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.049831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.049859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.049870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.060895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.060917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.060927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.068916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.068937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.068948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.078603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.078625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.078636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.087551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.087574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.087584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.095491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.095513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.095526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.105809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.105839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.105850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.113626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.113648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.113661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.123051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.123076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.123087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.132238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.132262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.132273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.141542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.141564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.141575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.053 [2024-07-15 11:55:03.149333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.053 [2024-07-15 11:55:03.149355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.053 [2024-07-15 11:55:03.149366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.159361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.159385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.159395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.169057] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.169079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.169090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.177555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.177577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.177588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.186535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.186558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.186568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.195316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.195338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.195348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.204249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.204271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.204281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.213182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.213205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.213216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.221688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.221711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.221721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.230398] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.230421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.230432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.238922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.238944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.238955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.248750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.248772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.248786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.257844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.257866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.257877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.265769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.265792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:4751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.265802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.275513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.275535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.275545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.284117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.284149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.292984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.293007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.293017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.301854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.301876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.301887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.311373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.311395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.311406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.319736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.319758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.319768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.328459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.328484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.328494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.338010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.338033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.338043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.346912] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.346934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.346945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.354931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.354953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.354963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.363892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.363913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.363923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.374224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.374248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.374259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.383671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.383694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.383704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.392135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.392157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.392167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.400784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.400805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:7586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.400816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.410041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.410062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.321 [2024-07-15 11:55:03.410073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.321 [2024-07-15 11:55:03.419120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.321 [2024-07-15 11:55:03.419142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.321 [2024-07-15 11:55:03.419152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.428629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.428651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.428661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.438578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.438600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.438611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.447353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.447376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.447386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.456017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.456038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.456049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.465252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.465274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.465284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.473496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.473518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 
lba:11697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.473529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.483111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.483133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.483147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.491738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.491759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.491769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.499933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.499954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.499964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.509373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.509395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.509405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.518259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.518282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.518292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.527688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.527710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.527721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.535858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.535881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.535892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.545508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.545531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.545542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.555205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.555228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.555239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.564236] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.564260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.564271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.572988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.573011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.573021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.582587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.582609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.582620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.591165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:35.580 [2024-07-15 11:55:03.591188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.580 [2024-07-15 11:55:03.591198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.580 [2024-07-15 11:55:03.600751] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 
00:28:35.580 [2024-07-15 11:55:03.600774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.580 [2024-07-15 11:55:03.600784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:35.580 [2024-07-15 11:55:03.609909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270)
00:28:35.580 [2024-07-15 11:55:03.609930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.580 [2024-07-15 11:55:03.609940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair=(0x235d270), the offending READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every affected command through 2024-07-15 11:55:04.851668; only the timestamps, cid, and lba values vary ...]
m:0 dnr:0 00:28:36.879 [2024-07-15 11:55:04.825736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:36.879 [2024-07-15 11:55:04.825759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.879 [2024-07-15 11:55:04.825769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.879 [2024-07-15 11:55:04.834915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:36.879 [2024-07-15 11:55:04.834937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.879 [2024-07-15 11:55:04.834948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.879 [2024-07-15 11:55:04.844241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:36.879 [2024-07-15 11:55:04.844265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.879 [2024-07-15 11:55:04.844275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.879 [2024-07-15 11:55:04.851634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x235d270) 00:28:36.879 [2024-07-15 11:55:04.851657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.879 [2024-07-15 11:55:04.851668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.879 00:28:36.879 Latency(us) 00:28:36.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.879 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:36.879 nvme0n1 : 2.00 28536.47 111.47 0.00 0.00 4480.15 2110.26 11901.34 00:28:36.879 =================================================================================================================== 00:28:36.879 Total : 28536.47 111.47 0.00 0.00 4480.15 2110.26 11901.34 00:28:36.879 0 00:28:36.879 11:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.879 11:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.879 11:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.879 | .driver_specific 00:28:36.879 | .nvme_error 00:28:36.879 | .status_code 00:28:36.879 | .command_transient_transport_error' 00:28:36.879 11:55:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 224 > 0 )) 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2124387 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@948 -- # '[' -z 2124387 ']' 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2124387 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2124387 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2124387' 00:28:37.138 killing process with pid 2124387 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2124387 00:28:37.138 Received shutdown signal, test time was about 2.000000 seconds 00:28:37.138 00:28:37.138 Latency(us) 00:28:37.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:37.138 =================================================================================================================== 00:28:37.138 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:37.138 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2124387 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2125021 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2125021 /var/tmp/bperf.sock 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2125021 ']' 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:37.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:37.397 11:55:05 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.397 [2024-07-15 11:55:05.330957] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
00:28:37.397 [2024-07-15 11:55:05.331006] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125021 ] 00:28:37.397 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:37.397 Zero copy mechanism will not be used. 00:28:37.397 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.397 [2024-07-15 11:55:05.400792] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.397 [2024-07-15 11:55:05.463549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.333 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.592 nvme0n1 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:38.592 11:55:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.592 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.592 Zero copy mechanism will not be used. 00:28:38.592 Running I/O for 2 seconds... 
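Condensed for reference, the second digest-error pass set up in the xtrace above amounts to the shell sequence below. This is a sketch rather than a verbatim excerpt of host/digest.sh: the SPDK_DIR shorthand and the explicit backgrounding are introduced here for readability, and it assumes that the plain rpc_cmd calls in the trace land on the NVMe-oF target application's default RPC socket; every flag, address, and RPC name is copied from the trace.

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start bdevperf on its own RPC socket in wait-for-config mode (-z):
# 128 KiB random reads, queue depth 16, 2-second run (flags as traced above).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 131072 -t 2 -q 16 -z &

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest errors show up in iostat instead of failing the job.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Reset any previous crc32c error injection; the trace issues this through
# rpc_cmd, i.e. (assumed here) against the target's default RPC socket.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the controller over TCP with data digest enabled (--ddgst).
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c computation on the target so the host side
# observes data digest errors on received payloads.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the configured workload.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

When the run finishes, the harness judges it with the same get_transient_errcount query traced after the previous run (there it returned 224); as a standalone check, that is roughly:

errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
(( errcount > 0 ))  # pass only if the injected digest errors were recorded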
00:28:38.853 [2024-07-15 11:55:06.708847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.853 [2024-07-15 11:55:06.708887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.853 [2024-07-15 11:55:06.708900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.853 [2024-07-15 11:55:06.719994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.853 [2024-07-15 11:55:06.720026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.853 [2024-07-15 11:55:06.720038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.853 [2024-07-15 11:55:06.730633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.853 [2024-07-15 11:55:06.730660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.853 [2024-07-15 11:55:06.730672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.853 [2024-07-15 11:55:06.740986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.853 [2024-07-15 11:55:06.741011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.741022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.751889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.751914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.751926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.762096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.762123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.762135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.772291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.772314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.772326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.782588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.782613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.782624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.793143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.793166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.793176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.803284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.803306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.803317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.813490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.813513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.813524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.822763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.822784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.822795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.831302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.831324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.831335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.838892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.838916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.838927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.847121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.847146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.847158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.854531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.854554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.854565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.860838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.860861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.860872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.867180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.867203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.867213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.873738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.873761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.873775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.880293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.880316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.880326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.886789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.886812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:38.854 [2024-07-15 11:55:06.886822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.893207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.893230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.893241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.899650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.899673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.899684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.906073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.906096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.906107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.912476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.912499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.912509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.918850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.918873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.918883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.925302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.925325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.925336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.931396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.931420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24480 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.931430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.938252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.938276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.938288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.944892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.854 [2024-07-15 11:55:06.944916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.854 [2024-07-15 11:55:06.944928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.854 [2024-07-15 11:55:06.951188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.855 [2024-07-15 11:55:06.951211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.855 [2024-07-15 11:55:06.951222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.855 [2024-07-15 11:55:06.957173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:38.855 [2024-07-15 11:55:06.957197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.855 [2024-07-15 11:55:06.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.961046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.961069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.961080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.967499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.967521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.973878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.973900] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.973911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.980237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.980259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.980272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.986593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.986614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.986625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.992925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.992947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.992958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:06.999321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:06.999343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:06.999354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.005673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.005694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.005704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.012009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.012031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.012042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.018367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.018389] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.018400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.024741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.024773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.031103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.031125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.031136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.037418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.037444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.037455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.043767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.043789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.043800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.115 [2024-07-15 11:55:07.050231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.115 [2024-07-15 11:55:07.050253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.115 [2024-07-15 11:55:07.050264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.056623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.056647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.056658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.063012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 
00:28:39.116 [2024-07-15 11:55:07.063034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.063044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.069356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.069379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.069389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.075752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.075786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.082106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.082128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.082140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.088462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.088484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.088495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.094821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.094848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.094859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.101242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.101265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.101276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.107667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.107690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.107702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.114033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.114055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.114066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.120377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.120399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.120409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.126812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.126839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.126850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.133100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.133122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.133132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.139472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.139495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.139505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.145846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.145867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.145881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.152180] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.152202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.152213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.158569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.158601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.164965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.164987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.164998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.171385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.171407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.171417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.177764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.177786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.177796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.184124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.184146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.184157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.190531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.190553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.190563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:39.116 [2024-07-15 11:55:07.196896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.196917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.196929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.203240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.203265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.203276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.209623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.209645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.209656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.116 [2024-07-15 11:55:07.215996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.116 [2024-07-15 11:55:07.216018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.116 [2024-07-15 11:55:07.216029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.222375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.222397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.222408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.228806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.228828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.228843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.235183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.235205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.235216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.241518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.241540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.241551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.247919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.247941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.247952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.254313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.254335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.254346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.260690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.260712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.260723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.267044] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.267066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.267077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.273424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.273447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.273457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.280309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.280332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.280342] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.286939] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.286961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.286971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.294452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.294474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.294484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.301460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.301482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.301493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.308454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.308475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.308486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.314646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.314669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.314684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.377 [2024-07-15 11:55:07.321788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.377 [2024-07-15 11:55:07.321810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.377 [2024-07-15 11:55:07.321820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.335109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.335131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.335141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.345913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.345935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.345947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.357017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.357040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.357052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.369165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.369188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.369200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.381542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.381564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.381575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.391151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.391174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.391186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.400567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.400590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.400601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.408177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.408202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.378 [2024-07-15 11:55:07.408213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.420513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.420535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.420547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.432612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.432634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.432645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.441589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.441611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.441622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.449437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.449459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.449470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.458979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.459001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.459012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.468836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.468859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.468870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.378 [2024-07-15 11:55:07.477550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.378 [2024-07-15 11:55:07.477574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.378 [2024-07-15 11:55:07.477586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.485240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.485264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.485276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.491877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.491900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.491911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.498387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.498409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.498420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.504839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.504874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.511331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.511354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.511364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.517786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.517808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.517819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.524293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.524316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.524327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.530693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.530717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.530727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.537143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.537165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.537176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.543534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.543560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.543571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.549960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.549982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.549993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.556252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.556275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.556286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.562722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.562744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.562755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.569048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.569070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.569081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.575365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.575387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.575398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.581690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.581713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.581723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.588023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.588046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.588057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.594389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.594411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.594422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.600777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.600799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.600810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.607137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.607159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.607171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.613526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 
[2024-07-15 11:55:07.613548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.613559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.619902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.619924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.619934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.626249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.626271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.626282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.632639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.632661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.632672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.638978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.639001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.639012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.639 [2024-07-15 11:55:07.645302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.639 [2024-07-15 11:55:07.645325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.639 [2024-07-15 11:55:07.645336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.651631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.651653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.651668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.658013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.658035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.658047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.664363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.664386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.664397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.670701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.670723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.670734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.677082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.677105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.677116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.683399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.683421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.683433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.689764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.689786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.689797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.696114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.696136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.696147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.702458] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.702480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.702491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.708807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.708838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.708850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.716089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.716112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.716123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.723879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.723902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.723913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.732331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.732355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.732366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.640 [2024-07-15 11:55:07.740119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.640 [2024-07-15 11:55:07.740143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.640 [2024-07-15 11:55:07.740155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.748306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.748331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.748343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:39.901 [2024-07-15 11:55:07.756584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.756608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.756619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.765532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.765556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.765568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.773588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.773612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.773623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.782652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.782676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.782688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.791622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.791646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.791657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.799361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.799385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.799396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.807088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.807113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.807124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.815315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.815338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.815349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.824494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.824517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.824528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.833779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.833803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.833815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.843455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.843477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.843489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.852341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.852364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.852379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.861039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.861062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.861073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.869461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.869496] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.876923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.876946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.876957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.885482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.885506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.885517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.894195] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.894218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.894230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.902583] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.901 [2024-07-15 11:55:07.902607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.901 [2024-07-15 11:55:07.902618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.901 [2024-07-15 11:55:07.910308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.910332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.910342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.917377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.917399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.917410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.924104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.924130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.924141] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.930608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.930631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.930642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.937364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.937388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.937399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.944664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.944687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.944698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.951846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.951869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.951880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.959882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.959906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.959917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.967062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.967085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.967096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.973748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.973771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:39.902 [2024-07-15 11:55:07.973782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.979868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.979891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.979903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.987190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.987214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.987225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:07.993702] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:07.993725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:07.993735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:39.902 [2024-07-15 11:55:08.000177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:39.902 [2024-07-15 11:55:08.000201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:39.902 [2024-07-15 11:55:08.000212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.006733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.006758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.006768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.012640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.012664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.012674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.018988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.019012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14560 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.019023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.025502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.025526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.025536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.032022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.032045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.032056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.038556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.038580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.038594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.045081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.045104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.045115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.051685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.051708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.162 [2024-07-15 11:55:08.051720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.162 [2024-07-15 11:55:08.058099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.162 [2024-07-15 11:55:08.058122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.058132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.068463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.068486] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.068497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.081071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.081094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.081105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.090882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.090905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.090915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.099586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.099609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.099620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.107176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.107199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.107210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.118345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.118368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.118379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.130270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.130293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:40.163 [2024-07-15 11:55:08.130304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:40.163 [2024-07-15 11:55:08.140945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0) 00:28:40.163 [2024-07-15 11:55:08.140969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.163 [2024-07-15 11:55:08.140981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:40.163 [2024-07-15 11:55:08.150366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1aed9d0)
00:28:40.163 [2024-07-15 11:55:08.150389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:40.163 [2024-07-15 11:55:08.150399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[log excerpt condensed: the same three-line pattern (data digest error on tqpair=(0x1aed9d0), READ command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats from 11:55:08.160099 through 11:55:08.689964, with cid rotating over 0/1/2/3/4/6/15 and lba varying per entry]
00:28:40.686
00:28:40.686 Latency(us)
00:28:40.686 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.686 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:40.686 nvme0n1 : 2.00 4175.00 521.88 0.00 0.00 3829.48 838.86 16148.07
00:28:40.686 ===================================================================================================================
00:28:40.686 Total : 4175.00 521.88 0.00 0.00 3829.48 838.86 16148.07
00:28:40.686 0
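A quick cross-check of the bdevperf summary above (an editorial note, not part of the captured log): with this job's 131072-byte IO size, the reported IOPS and throughput agree, since MiB/s = IOPS x IO size / 2^20. Assuming a POSIX shell with bc available:

    echo 'scale=3; 4175.00 * 131072 / 1048576' | bc
    # 521.875, i.e. the reported 521.88 MiB/s up to rounding of the IOPS figure

The Average/min/max columns are in microseconds, per the Latency(us) header; the wide 838.86..16148.07 us spread plausibly reflects READs that had to be reissued after the digest-error completions logged above.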
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:40.686 | .driver_specific
00:28:40.686 | .nvme_error
00:28:40.686 | .status_code
00:28:40.687 | .command_transient_transport_error'
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 269 > 0 ))
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2125021
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2125021 ']'
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2125021
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2125021
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2125021'
killing process with pid 2125021
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2125021
00:28:40.946 Received shutdown signal, test time was about 2.000000 seconds
00:28:40.946
00:28:40.946 Latency(us)
00:28:40.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:40.946 ===================================================================================================================
00:28:40.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:55:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2125021
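The get_transient_errcount helper traced above reduces to a single RPC plus a jq filter: bdev_get_iostat reports per-opcode NVMe error counters under driver_specific because the bperf instance is configured with bdev_nvme_set_options --nvme-error-stat (visible in the setup trace below), and the check (( 269 > 0 )) then asserts that at least one command completed with a transient transport error, i.e. that the injected digest corruption was actually observed by the initiator. A minimal standalone sketch of the same query (editorial, not from the log), assuming SPDK_DIR points at an SPDK checkout and a bperf instance is listening on the socket:

    # Hypothetical helper equivalent to get_transient_errcount nvme0n1
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    "$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'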
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2125582
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2125582 /var/tmp/bperf.sock
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2125582 ']'
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:41.206 [2024-07-15 11:55:09.173293] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:28:41.206 [2024-07-15 11:55:09.173367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125582 ]
00:28:41.206 EAL: No free 2048 kB hugepages reported on node 1
00:28:41.206 [2024-07-15 11:55:09.243166] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:41.206 [2024-07-15 11:55:09.316227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:55:09 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:42.549 nvme0n1
11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
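Taken together, the trace above is the generic bperf pattern this suite uses for digest-error testing. Condensed into one hedged sketch (editorial, not from the log; same flags and addresses as the run above, SPDK_DIR assumed as before, and the bdevperf binary backgrounded here only for illustration):

    # 1. Start bdevperf idle (-z): it waits on the RPC socket instead of running immediately.
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    # 2. Enable per-opcode NVMe error counters and retry transient errors indefinitely.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # 3. Attach the target over TCP with data digest (--ddgst) enabled; this creates nvme0n1.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 4. Arm crc32c error injection in the accel layer (-o crc32c -t corrupt -i 256, as traced above).
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256

    # 5. Kick off the timed run; each corrupted digest should surface as a transient transport error.
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

The xtrace lines that follow are the tail of the rpc_cmd invocation above, after which perform_tests starts the 2-second randwrite run whose digest errors are logged next.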
crc32c -t corrupt -i 256 00:28:42.549 11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.549 11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:42.549 11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.549 11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:42.549 11:55:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.808 Running I/O for 2 seconds... 00:28:42.808 [2024-07-15 11:55:10.671455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fd640 00:28:42.808 [2024-07-15 11:55:10.672341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.808 [2024-07-15 11:55:10.672369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.808 [2024-07-15 11:55:10.680897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.681111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.681136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.689973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.690168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.690191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.699172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.699387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.699408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.708361] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.708591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.708612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.717532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.717760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 
lba:11638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.717780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.726646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.726878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.726899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.735907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.736140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.736162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.745244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.745478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.745498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.754364] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.754588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.754613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.763525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.763752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.763773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.772633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.772873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.772895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.781743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.781983] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.782005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.790867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.791092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.791112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.799963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.800188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.800208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.809063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.809287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.809308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.818308] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.818534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.818555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.827650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.827884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.827905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.836994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.837239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.837261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.846359] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.846596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.846617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.855707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.855950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.855971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.864934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.865171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.865192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.874035] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.874261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.874281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.883145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.883376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.883398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.892274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.892499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.892519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.901344] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 11:55:10.901569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.901589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:42.809 [2024-07-15 11:55:10.910523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:42.809 [2024-07-15 
11:55:10.910769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.809 [2024-07-15 11:55:10.910790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.919889] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.920113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.920133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.928993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.929237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.929257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.938290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.938512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.938534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.947414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.947637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.947658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.956500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.956723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.956743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.965630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.965861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.069 [2024-07-15 11:55:10.965882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.069 [2024-07-15 11:55:10.974765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with 
pdu=0x2000190fbcf0 00:28:43.069 [2024-07-15 11:55:10.974997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:10.975018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:10.983854] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:10.984081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:10.984101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:10.992958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:10.993183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:10.993207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.002034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.002255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.002276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.011205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.011430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.011451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.020288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.020512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.020533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.029394] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.029626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.029647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.038487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.038713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.038734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.047611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.047840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.047860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.056795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.057035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.057055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.065917] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.066147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.066167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.074995] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.075232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.075252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.084090] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.084313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.084334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.093158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.093382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.093402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.102236] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.102452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.102472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.111474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.111701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.111722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.120530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.120754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.120775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.129636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.129864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.129884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.138652] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.138880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.138902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.147695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.147916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.147938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.070 [2024-07-15 11:55:11.156782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.157026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.157047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
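The stream above and below repeats one three-entry pattern per queued I/O: tcp.c:2067:data_crc32_calc_done flags a data digest error on the qpair, and the host then prints the affected WRITE command plus its completion carrying COMMAND TRANSIENT TRANSPORT ERROR (00/22). The digest being verified is the CRC32C DDGST that NVMe/TCP appends to PDU data. For orientation only, here is a minimal sketch of such a check: a plain bitwise CRC32C with a hypothetical received_ddgst standing in for the wire field, not SPDK's optimized implementation.

/* ddgst.c - illustrative CRC32C (Castagnoli) data-digest check.
 * This is NOT SPDK's code; it only sketches the comparison that
 * data_crc32_calc_done reports on when it fails.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reflected CRC32C: polynomial 0x82F63B78, init and final XOR 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Standard CRC32C check value: "123456789" -> 0xE3069283. */
    const uint8_t payload[] = "123456789";
    uint32_t ddgst = crc32c(payload, strlen((const char *)payload));
    printf("computed DDGST: 0x%08" PRIX32 "\n", ddgst);

    /* A receiver recomputes the digest over the PDU data and compares it
     * with the DDGST field from the wire; a mismatch is a data digest
     * error, surfaced to the host as COMMAND TRANSIENT TRANSPORT ERROR
     * (00/22). received_ddgst below is a hypothetical wire value. */
    uint32_t received_ddgst = 0xE3069283u;
    if (ddgst != received_ddgst) {
        fprintf(stderr, "data digest error\n");
        return 1;
    }
    return 0;
}

Built with cc -o ddgst ddgst.c, the sketch prints the well-known CRC32C check value 0xE3069283 for "123456789". In the run logged here, the recomputed digest disagrees with the wire DDGST for each WRITE, which is exactly what every *ERROR* entry records before the corresponding command and completion are printed.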
00:28:43.070 [2024-07-15 11:55:11.165900] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.070 [2024-07-15 11:55:11.166126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.070 [2024-07-15 11:55:11.166146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.330 [2024-07-15 11:55:11.175238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.330 [2024-07-15 11:55:11.175471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.330 [2024-07-15 11:55:11.175493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.330 [2024-07-15 11:55:11.184588] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.330 [2024-07-15 11:55:11.184845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.330 [2024-07-15 11:55:11.184867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.330 [2024-07-15 11:55:11.193928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.194167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.194189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.203239] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.203467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.203488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.212593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.212830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.212854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.221842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.222076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.222097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.230962] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.231191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.231215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.240022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.240248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.240269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.249118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.249341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.249362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.258199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.258424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.258444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.267303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.267547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.276353] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.276578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.276598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.285459] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.285685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.285705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.294752] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.294984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:20619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.295005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.303845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.304071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.304091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.312916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.313154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.313174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.321991] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.322215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.322236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.331071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.331295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.331316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.340130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.340352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.340373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.349199] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.349423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.349444] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.358305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.358529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.358549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.367252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.367482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.367503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.376340] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.376563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.376584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.385445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.385674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.385695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.394522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.394744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.394764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.403720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.403951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.331 [2024-07-15 11:55:11.403972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.331 [2024-07-15 11:55:11.412800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.331 [2024-07-15 11:55:11.413033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.332 [2024-07-15 11:55:11.413054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.332 [2024-07-15 11:55:11.421881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.332 [2024-07-15 11:55:11.422105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.332 [2024-07-15 11:55:11.422125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.332 [2024-07-15 11:55:11.431010] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.332 [2024-07-15 11:55:11.431238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.332 [2024-07-15 11:55:11.431259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.440369] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.440620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.440641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.449609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.449839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.449860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.458707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.458943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.458963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.467779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.468011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.468035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.476867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.477091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 
11:55:11.477111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.485961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.486185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.486206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.495044] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.495268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.495288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.504117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.504339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.504359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.513170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.513396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.513417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.522223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.522451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.522471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.531312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.531535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.531555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.540374] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.540599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:6251 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:43.591 [2024-07-15 11:55:11.540619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.549452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.549678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.549699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.558540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.558774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.558795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.567731] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.567974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.567995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.576828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.577058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:23172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.577079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.586099] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.586324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.586344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.595194] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.595428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.595449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.604191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.604415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13201 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.604435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.613275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.613496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.613516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.622351] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.622575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.622595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.631401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.631633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.631654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.640478] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.640703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.640724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.649558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.649783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.649803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.658629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.658866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.658886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.667686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.667914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:10 nsid:1 lba:2000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.591 [2024-07-15 11:55:11.667934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.591 [2024-07-15 11:55:11.676719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.591 [2024-07-15 11:55:11.676977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.592 [2024-07-15 11:55:11.676997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.592 [2024-07-15 11:55:11.685777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.592 [2024-07-15 11:55:11.686008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.592 [2024-07-15 11:55:11.686028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.592 [2024-07-15 11:55:11.694992] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.592 [2024-07-15 11:55:11.695252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.592 [2024-07-15 11:55:11.695273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.704235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.704469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.704494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.713300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.713531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.713552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.722385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.722610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.722631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.731462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.731684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.731706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.740536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.740759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.740780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.749618] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.749845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.749865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.758705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.758934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.758955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.767695] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.767926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.767946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.776762] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.777003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.777024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.785815] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.786055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.786075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.794915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 
11:55:11.795137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.795157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.803999] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.804225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.804245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.813051] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.813278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.813298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.822139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.822362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.822383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.831195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.831422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.840269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.840494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.851 [2024-07-15 11:55:11.840515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.851 [2024-07-15 11:55:11.849339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.851 [2024-07-15 11:55:11.849565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.849585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.858424] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with 
pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.858649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.858669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.867512] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.867739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.867759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.876598] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.876823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.876848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.885698] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.885924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.885945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.894778] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.895011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.895031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.903874] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.904099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.904120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.912971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.913215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.913236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.922015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.922237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.922257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.931100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.931319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:15968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.931339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.940200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.940418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.940442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:43.852 [2024-07-15 11:55:11.949283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:43.852 [2024-07-15 11:55:11.949521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:43.852 [2024-07-15 11:55:11.949542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:11.958607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:11.958840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:11.958862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:11.967789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:11.968019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:11.968040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:11.976879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:11.977123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:11.977143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:11.985972] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:11.986212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:11.986233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:11.995084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:11.995302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:11.995321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:12.004170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:12.004386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:18957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:12.004407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:12.013282] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:12.013498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:21069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:12.013519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:12.022334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:12.022566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:12.022586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:12.031464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:12.031695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:12.031716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 [2024-07-15 11:55:12.040594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0 00:28:44.112 [2024-07-15 11:55:12.040817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:44.112 [2024-07-15 11:55:12.040843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:44.112 
[2024-07-15 11:55:12.049672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0
00:28:44.112 [2024-07-15 11:55:12.049901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.112 [2024-07-15 11:55:12.049923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair=(0x15586f0), the offending 4 KiB WRITE on cid 10, 123 or 124 with varying lba, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:007f) repeats roughly every 9 ms for the rest of the run, from 11:55:12.049 through 11:55:12.660; individual entries differ only in timestamp, cid and lba ...]
00:28:44.634 [2024-07-15 11:55:12.660336] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15586f0) with pdu=0x2000190fbcf0
00:28:44.634 [2024-07-15 11:55:12.660561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:44.634 [2024-07-15 11:55:12.660581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:28:44.634
00:28:44.634 Latency(us)
00:28:44.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.634 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:44.634 nvme0n1 : 2.00 27948.87 109.18 0.00 0.00 4572.07 2123.37 10066.33
00:28:44.634 ===================================================================================================================
00:28:44.634 Total : 27948.87 109.18 0.00 0.00 4572.07 2123.37 10066.33
00:28:44.634 0
00:28:44.634 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
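The counter read that follows is the test's pass/fail criterion: the bdev layer has been accumulating per-NVMe-status error counts (enabled with bdev_nvme_set_options --nvme-error-stat when the run was set up), and get_transient_errcount pulls the TRANSIENT TRANSPORT ERROR counter for nvme0n1 over the bperf RPC socket. As a sanity check on the table above, 27948.87 IOPS x 4096 B is about 109.18 MiB/s, which matches the reported throughput. A minimal sketch of the counter query, built only from the rpc.py call and jq filter traced below (the wrapper function itself is illustrative, not the literal body of host/digest.sh):

    # Sketch: read the transient-transport-error count for a bdev.
    # The rpc.py invocation and the jq path are exactly the ones traced
    # below; the surrounding function is illustrative.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

This run produced 219 such completions, so the (( 219 > 0 )) assertion below passes and the first bperf instance is torn down.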
00:28:44.634 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:44.634 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:44.634 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:44.634 | .driver_specific
00:28:44.634 | .nvme_error
00:28:44.634 | .status_code
00:28:44.634 | .command_transient_transport_error'
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 219 > 0 ))
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2125582
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2125582 ']'
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2125582
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2125582
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2125582'
00:28:44.894 killing process with pid 2125582
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2125582
00:28:44.894 Received shutdown signal, test time was about 2.000000 seconds
00:28:44.894
00:28:44.894 Latency(us)
00:28:44.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:44.894 ===================================================================================================================
00:28:44.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:44.894 11:55:12 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2125582
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2126362
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2126362 /var/tmp/bperf.sock
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
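The second pass repeats the digest-error experiment with 128 KiB random writes at queue depth 16. bdevperf is launched with -z, so it starts idle and waits to be configured over the UNIX-domain RPC socket named by -r, and waitforlisten blocks until that socket answers. A rough sketch of the launch-and-wait sequence, reusing the flags traced above; the polling loop is a simplified stand-in for the real waitforlisten in autotest_common.sh, and rpc_get_methods is just a cheap standard RPC to probe with:

    # Sketch: start bdevperf idle (-z) and wait for its RPC socket.
    sock=/var/tmp/bperf.sock
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    for ((i = 0; i < 100; i++)); do
        # Succeeds once the app's RPC server is listening on the socket.
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s "$sock" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done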
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2126362 ']'
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:45.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:45.154 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:45.154 [2024-07-15 11:55:13.145953] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:28:45.154 [2024-07-15 11:55:13.146005] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2126362 ]
00:28:45.154 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:45.154 Zero copy mechanism will not be used.
00:28:45.154 EAL: No free 2048 kB hugepages reported on node 1
00:28:45.154 [2024-07-15 11:55:13.214671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:45.154 [2024-07-15 11:55:13.278487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:45.412 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:46.040 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:28:46.040 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.040 11:55:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:46.040 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:46.040 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:46.040 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
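Two pieces of host-side setup happen before any I/O is issued, both visible in the trace above. bdev_nvme_set_options --nvme-error-stat enables the per-status-code error counters that get_transient_errcount reads at the end of the run, and --bdev-retry-count -1 tells the bdev layer to retry failed I/O indefinitely, so each injected digest failure is counted and retried instead of failing bdevperf outright. accel_error_inject_error -o crc32c -t disable then clears any crc32c error injection still armed from the 4 KiB pass. In plain form (rpc.py abbreviates the full scripts/rpc.py path; rpc_cmd appears to talk to the target application's default RPC socket rather than /var/tmp/bperf.sock, which is inferred from the helper names, not shown in the trace):

    # Keep per-NVMe-status error counters and retry failed I/O forever.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any crc32c error injection left over from the previous pass.
    rpc_cmd accel_error_inject_error -o crc32c -t disable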
00:28:46.298 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:46.298 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.298 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:46.557 nvme0n1
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:46.557 11:55:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:46.557 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:46.557 Zero copy mechanism will not be used.
00:28:46.558 Running I/O for 2 seconds...
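With nvme0 attached over TCP and --ddgst enabled (a CRC32C data digest is carried on each data-bearing PDU), the fault is re-armed: accel_error_inject_error -o crc32c -t corrupt -i 32 corrupts the result of the next 32 crc32c operations in the accel layer, so computed data digests stop matching what is on the wire and the affected 128 KiB WRITEs complete with COMMAND TRANSIENT TRANSPORT ERROR (00/22), the three-line pattern that fills the rest of the run below. bperf_py perform_tests then starts the timed workload in the waiting bdevperf process. The sequence, condensed from the traced commands (rpc.py and bdevperf.py abbreviate the full scripts/rpc.py and examples/bdev/bdevperf/bdevperf.py paths):

    # Attach the target namespace over TCP with data digest (DDGST) on.
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c results so data-digest checks fail.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # Kick off the timed run in the idle bdevperf instance.
    bdevperf.py -s /var/tmp/bperf.sock perform_tests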
00:28:46.558 [2024-07-15 11:55:14.647820] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90
00:28:46.558 [2024-07-15 11:55:14.648225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:46.558 [2024-07-15 11:55:14.648256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (a data digest error on tqpair=(0x1558a30), the offending 128 KiB WRITE on cid:15 with varying lba, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22), sqhd cycling 0001/0021/0041/0061) repeats roughly every 7-17 ms from 11:55:14.659 through 11:55:15.117 ...]
00:28:47.080 [2024-07-15 11:55:15.124705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90
[2024-07-15 11:55:15.125018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.125039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.131976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.132321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.132342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.140878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.141266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.141286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.148398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.148716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.148737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.155116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.155515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.155537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.161968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.162293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.162318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.169267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.169667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.169689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.080 [2024-07-15 11:55:15.177160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.080 [2024-07-15 11:55:15.177454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.080 [2024-07-15 11:55:15.177476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.184720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.185037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.339 [2024-07-15 11:55:15.185058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.191708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.192024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.339 [2024-07-15 11:55:15.192045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.198793] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.199141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.339 [2024-07-15 11:55:15.199162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.205633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.206012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.339 [2024-07-15 11:55:15.206033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.212812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.213199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.339 [2024-07-15 11:55:15.213220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.339 [2024-07-15 11:55:15.219670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.339 [2024-07-15 11:55:15.219988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.220009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.227298] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.227596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.227617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.234124] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.234423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.234444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.241565] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.241901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.241923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.248430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.248851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.248872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.255804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.256173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.256194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.263117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.263458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.263480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.269490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.269794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.269815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
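The repeated "data_crc32_calc_done: *ERROR*: Data digest error" records above come from the NVMe/TCP data digest check: when DDGST is negotiated, each PDU's data field carries a trailing CRC32C, and a mismatch on receive is what this test is deliberately provoking. A minimal, self-contained sketch of such a check — plain bitwise CRC32C (Castagnoli), not SPDK's actual table/SSE4.2 implementation; the PDU field names here are illustrative assumptions, not SPDK API:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC32C (reflected polynomial 0x82F63B78); production code
     * would use a table-driven or hardware-assisted variant. */
    static uint32_t crc32c(uint32_t crc, const uint8_t *buf, size_t len)
    {
        crc = ~crc;
        while (len--) {
            crc ^= *buf++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & (0 - (crc & 1)));
        }
        return ~crc;
    }

    /* Illustrative only: compare the digest trailing a PDU data field
     * against a CRC32C computed over the received bytes. */
    static bool pdu_data_digest_ok(const uint8_t *data, size_t len,
                                   uint32_t ddgst)
    {
        return crc32c(0, data, len) == ddgst;
    }

    int main(void)
    {
        uint8_t payload[32] = { 0 };   /* one 32-byte data field */
        uint32_t good = crc32c(0, payload, sizeof(payload));

        printf("intact:    %d\n", pdu_data_digest_ok(payload, sizeof(payload), good));
        payload[0] ^= 0xFF;            /* corrupt one byte "in flight" */
        printf("corrupted: %d\n", pdu_data_digest_ok(payload, sizeof(payload), good));
        return 0;
    }

Note that in the log the digest failures do not tear down the connection: each one is reported per command and the stream of WRITEs on the same tqpair (0x1558a30) simply continues, which is consistent with the digest error being surfaced as a retryable per-command transport error rather than a fatal link error.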
00:28:47.340 [2024-07-15 11:55:15.275960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.276296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.276317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.283627] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.284003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.284023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.291580] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.292008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.292029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.299682] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.300064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.300085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.307786] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.308179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.308199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.316191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.316541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.316562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.324165] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.324597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.324618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.332774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.333231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.333252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.341595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.341998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.342020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.350405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.350829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.350865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.359186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.359545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.359570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.367325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.367804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.367825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.376033] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.376505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.376526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.384613] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.385004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.385024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.393028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.393448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.393469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.401317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.401727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.401747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.409937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.410294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.410315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.418654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.419091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.419112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.427435] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.427883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.427904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.340 [2024-07-15 11:55:15.435914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.340 [2024-07-15 11:55:15.436308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.340 [2024-07-15 11:55:15.436329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.600 [2024-07-15 11:55:15.444760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.600 [2024-07-15 11:55:15.445227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.600 [2024-07-15 11:55:15.445249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.600 [2024-07-15 11:55:15.453264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.600 [2024-07-15 11:55:15.453730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.600 [2024-07-15 11:55:15.453751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.600 [2024-07-15 11:55:15.461500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.600 [2024-07-15 11:55:15.461922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.600 [2024-07-15 11:55:15.461943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.600 [2024-07-15 11:55:15.470046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.600 [2024-07-15 11:55:15.470453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.600 [2024-07-15 11:55:15.470474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.600 [2024-07-15 11:55:15.478534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.600 [2024-07-15 11:55:15.478899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.478921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.487012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.487461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.487483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.495658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.496037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.496058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.503928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.504308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 
[2024-07-15 11:55:15.504330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.512569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.512910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.512931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.520826] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.521242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.521263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.529600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.529950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.529972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.537891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.538274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.538296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.546283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.546648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.546669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.555091] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.555492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.555514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.563172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.563568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.563590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.571506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.571928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.571949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.578779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.579146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.579170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.585967] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.586353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.586374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.592335] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.592634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.592655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.599075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.599403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.599425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.606358] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.606687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.606709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.613189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.613490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.613510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.619680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.620001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.620023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.626745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.627047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.627068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.633370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.633744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.633765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.639824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.640086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.640108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.645985] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.646243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.646265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.652604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.652896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.652916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.659145] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.659501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.659524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.666800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.667170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.667192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.674735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.675111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.675133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.682958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.683325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.683346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.691209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.691473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.601 [2024-07-15 11:55:15.691495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.601 [2024-07-15 11:55:15.699318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.601 [2024-07-15 11:55:15.699650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.602 [2024-07-15 11:55:15.699675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.707470] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.707744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.707765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.715498] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 
[2024-07-15 11:55:15.715844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.715866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.724011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.724351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.724372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.732453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.732746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.732767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.740348] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.740750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.740772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.748391] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.748773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.748794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.756909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.757243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.757264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.765272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.765568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.765589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.773544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.773874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.773896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.781522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.781818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.781845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.789622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.790022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.790043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.797357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.797697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.797718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.805223] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.805598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.805618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.813509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.813952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.813973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.821957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.862 [2024-07-15 11:55:15.822253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.862 [2024-07-15 11:55:15.822273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.862 [2024-07-15 11:55:15.829451] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.829759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.829780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.835961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.836257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.836278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.842587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.842896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.842918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.849629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.849920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.849941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.856423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.856752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.856774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.863636] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.863952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.863974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.870046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.870354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.870375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
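Each digest failure above is completed back to the I/O as "COMMAND TRANSIENT TRANSPORT ERROR (00/22)", i.e. status code type 0x0 (generic command status) and status code 0x22 (Transient Transport Error), with dnr:0 so the host is allowed to retry. A hedged sketch of how the status word in CQE dword 3 breaks into the fields printed in these records — field layout per the NVMe base specification; the struct and function names are local to this example:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe CQE dword 3, upper 16 bits = status field:
     *   bit 16 P (phase), bits 24:17 SC, bits 27:25 SCT,
     *   bits 29:28 CRD, bit 30 M (more), bit 31 DNR (do not retry). */
    struct cqe_status {
        uint8_t p, sc, sct, crd, m, dnr;
    };

    static struct cqe_status decode_status(uint32_t cqe_dw3)
    {
        struct cqe_status s = {
            .p   = (cqe_dw3 >> 16) & 0x1,
            .sc  = (cqe_dw3 >> 17) & 0xff,
            .sct = (cqe_dw3 >> 25) & 0x7,
            .crd = (cqe_dw3 >> 28) & 0x3,
            .m   = (cqe_dw3 >> 30) & 0x1,
            .dnr = (cqe_dw3 >> 31) & 0x1,
        };
        return s;
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x22 = Transient Transport Error, as logged above. */
        uint32_t dw3 = (0x22u << 17) | (0x0u << 25);
        struct cqe_status s = decode_status(dw3);

        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", s.sct, s.sc, s.p, s.m, s.dnr);
        return 0;
    }

Because dnr is clear and the status type is generic rather than media/data-integrity, initiators treat these completions as retryable transport-level failures, which matches the test's intent of exercising the digest-error path without failing the device.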
00:28:47.863 [2024-07-15 11:55:15.876317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.876633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.876655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.883692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.884034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.884056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.891455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.891772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.891793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.899730] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.900005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.900031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.907767] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.908152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.908173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.915902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.916260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.924148] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.924480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.924501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.932143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.932516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.932537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.940273] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.940653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.940676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.948579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.948928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.948950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.956031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.956383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.956405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:47.863 [2024-07-15 11:55:15.964143] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:47.863 [2024-07-15 11:55:15.964474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:47.863 [2024-07-15 11:55:15.964495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:15.972341] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:15.972672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:15.972694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:15.980477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:15.980785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:15.980807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:15.988694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:15.989077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:15.989099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:15.996865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:15.997237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:15.997257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.005015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.005388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.005409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.013808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.014193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.014214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.020845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.021262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.021283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.029370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.029626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.029648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.037270] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.037540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.037560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.045015] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.045383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.045404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.123 [2024-07-15 11:55:16.053451] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.123 [2024-07-15 11:55:16.053805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.123 [2024-07-15 11:55:16.053826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.061614] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.061954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.061975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.069227] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.069574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.069594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.077251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.077596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.077617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.084993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.085388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.085409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.092968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.093279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 
[2024-07-15 11:55:16.093299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.100637] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.100955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.100976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.108209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.108563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.108587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.115522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.115824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.115850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.122777] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.123038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.123059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.129558] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.129850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.129871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.136910] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.137284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.137304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.143401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.143653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.143675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.150122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.150415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.157325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.157573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.157594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.164325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.164630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.164650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.171098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.171368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.171390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.177330] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.177617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.177639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.184505] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.184775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.184796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.190933] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.191269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.191289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.198280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.198552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.198573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.205175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.205443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.205463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.211595] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.211924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.211945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.218872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.219124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.219144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.124 [2024-07-15 11:55:16.225922] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.124 [2024-07-15 11:55:16.226185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.124 [2024-07-15 11:55:16.226209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.384 [2024-07-15 11:55:16.233113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.384 [2024-07-15 11:55:16.233461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.384 [2024-07-15 11:55:16.233482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.384 [2024-07-15 11:55:16.239977] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.240284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.240306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.246615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.246905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.246926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.254191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.254560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.254581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.262444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.262809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.262830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.270940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.271317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.271339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.279414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.279686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.279707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.287264] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.287608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.287629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.295645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 
[2024-07-15 11:55:16.295965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.295987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.303914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.304292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.304313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.311972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.312338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.320149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.320521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.320542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.328356] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.328734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.328755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.336719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.337039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.337059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.344949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.345274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.345294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.352957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.353321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.353342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.361476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.361855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.361876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.370074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.370386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.370408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.378159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.378576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.378597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.386604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.386924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.386944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.394711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.395071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.395092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.403195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.403558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.403579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.410925] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.411170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.411191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.417550] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.417866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.417887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.424609] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.424867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.424888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.430916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.431211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.431235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.438073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.438368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.438389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.444859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.445142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.445164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.451514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.451846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.451867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:28:48.385 [2024-07-15 11:55:16.458422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.458747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.458768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.465444] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.465721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.465742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.472169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.472423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.472444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.479127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.479380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.479400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.385 [2024-07-15 11:55:16.485352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.385 [2024-07-15 11:55:16.485610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.385 [2024-07-15 11:55:16.485631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.491503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.491840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.491861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.498242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.498534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.498555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.505438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.505691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.505712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.511671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.511929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.511949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.519012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.519310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.519330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.525802] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.526104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.526124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.533462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.533765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.533785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.541422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.541754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.541775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.548888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.549080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.549101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.559615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.560220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.560242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.573127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.573587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.573608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.583067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.583410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.583432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.591055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.591418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.591438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.598166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.598451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.598472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.604994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.605307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.605327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:48.646 [2024-07-15 11:55:16.611612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90 00:28:48.646 [2024-07-15 11:55:16.611925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:48.646 [2024-07-15 11:55:16.611945] nvme_qpair.c: 
[2024-07-15 11:55:16.625246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1558a30) with pdu=0x2000190fef90
00:28:48.646 [2024-07-15 11:55:16.625573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:48.646 [2024-07-15 11:55:16.625598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:48.646
00:28:48.646 Latency(us)
00:28:48.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.646 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:48.646 nvme0n1 : 2.00 3877.84 484.73 0.00 0.00 4120.02 2686.98 18559.80
00:28:48.646 ===================================================================================================================
00:28:48.646 Total : 3877.84 484.73 0.00 0.00 4120.02 2686.98 18559.80
00:28:48.646 0
00:28:48.646 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:48.646 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:48.646 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:48.646 | .driver_specific
00:28:48.646 | .nvme_error
00:28:48.646 | .status_code
00:28:48.646 | .command_transient_transport_error'
00:28:48.646 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 ))
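As an aside, the count checked above can be reproduced by hand while the bperf app is still up. A minimal sketch, assuming the same RPC socket and bdev name as this run (/var/tmp/bperf.sock, nvme0n1) and jq available on the node:

    # Fetch the per-bdev iostat JSON over the bperf RPC socket and extract the
    # transient transport error counter that get_transient_errcount asserts on.
    errs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
              -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
           jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 )) && echo "observed $errs transient transport errors"   # 250 in this run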
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2126362
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2126362 ']'
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2126362
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2126362
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2126362'
00:28:48.906 killing process with pid 2126362
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2126362
00:28:48.906 Received shutdown signal, test time was about 2.000000 seconds
00:28:48.906
00:28:48.906 Latency(us)
00:28:48.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:48.906 ===================================================================================================================
00:28:48.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:48.906 11:55:16 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2126362
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2124197
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2124197 ']'
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2124197
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2124197
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2124197'
00:28:49.166 killing process with pid 2124197
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2124197
00:28:49.166 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2124197
00:28:49.425
00:28:49.425 real 0m16.815s
00:28:49.425 user 0m31.692s
00:28:49.425 sys 0m4.940s
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:49.425 ************************************
00:28:49.425 END TEST nvmf_digest_error
00:28:49.425 ************************************
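The killprocess helper exercised in the teardown above lives in test/common/autotest_common.sh. Reconstructed from the traced commands alone, it follows roughly this shape (a sketch, not the exact in-tree implementation):

    killprocess() {
        # require a pid argument (@948)
        [ -z "$1" ] && return 1
        # only act if the process still exists (@952); otherwise report it
        if kill -0 "$1" 2>/dev/null; then
            # on Linux, resolve the command name (@953/@954) and refuse to
            # kill a sudo wrapper directly (@958)
            local process_name=
            [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$1")
            [ "$process_name" = sudo ] && return 1
            echo "killing process with pid $1"   # @966
            kill "$1"                            # @967
            wait "$1"                            # @972
        else
            echo "Process with pid $1 is not found"   # @975, seen below for 2124197
        fi
    }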
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:49.425 rmmod nvme_tcp
00:28:49.425 rmmod nvme_fabrics
00:28:49.425 rmmod nvme_keyring
00:28:49.425 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2124197 ']'
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2124197
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2124197 ']'
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2124197
00:28:49.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2124197) - No such process
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 2124197 is not found'
00:28:49.426 Process with pid 2124197 is not found
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:49.426 11:55:17 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:51.965 11:55:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:51.965
00:28:51.965 real 0m42.390s
00:28:51.965 user 1m5.179s
00:28:51.965 sys 0m14.798s
00:28:51.965 11:55:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:51.965 11:55:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:28:51.965 ************************************
00:28:51.965 END TEST nvmf_digest
00:28:51.965 ************************************
00:28:51.965 11:55:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:51.965 11:55:19 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:28:51.965 11:55:19 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:28:51.965 11:55:19 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:28:51.965 11:55:19 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:51.965 11:55:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:51.965 11:55:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:51.965 11:55:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:51.965 ************************************
00:28:51.965 START TEST nvmf_bdevperf
00:28:51.965 ************************************
00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:28:51.965 * Looking for test storage...
00:28:51.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:51.965 11:55:19 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:58.533 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:58.533 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:58.533 Found net devices under 0000:af:00.0: cvl_0_0 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.533 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:58.534 Found net devices under 0000:af:00.1: cvl_0_1 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:58.534 11:55:25 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:58.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:28:58.534 00:28:58.534 --- 10.0.0.2 ping statistics --- 00:28:58.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.534 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:58.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:28:58.534 00:28:58.534 --- 10.0.0.1 ping statistics --- 00:28:58.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.534 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2130585 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2130585 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2130585 ']' 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:58.534 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.534 [2024-07-15 11:55:26.166234] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:58.534 [2024-07-15 11:55:26.166282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.534 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.534 [2024-07-15 11:55:26.240336] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:58.534 [2024-07-15 11:55:26.308468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:58.534 [2024-07-15 11:55:26.308511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.534 [2024-07-15 11:55:26.308520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.534 [2024-07-15 11:55:26.308528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.534 [2024-07-15 11:55:26.308535] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.534 [2024-07-15 11:55:26.308646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.534 [2024-07-15 11:55:26.308730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.534 [2024-07-15 11:55:26.308732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.103 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:59.103 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:28:59.103 11:55:26 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:59.103 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:59.103 11:55:26 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 [2024-07-15 11:55:27.020640] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 Malloc0 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.103 [2024-07-15 11:55:27.084078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:59.103 { 00:28:59.103 "params": { 00:28:59.103 "name": "Nvme$subsystem", 00:28:59.103 "trtype": "$TEST_TRANSPORT", 00:28:59.103 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.103 "adrfam": "ipv4", 00:28:59.103 "trsvcid": "$NVMF_PORT", 00:28:59.103 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.103 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.103 "hdgst": ${hdgst:-false}, 00:28:59.103 "ddgst": ${ddgst:-false} 00:28:59.103 }, 00:28:59.103 "method": "bdev_nvme_attach_controller" 00:28:59.103 } 00:28:59.103 EOF 00:28:59.103 )") 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:59.103 11:55:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:59.103 "params": { 00:28:59.103 "name": "Nvme1", 00:28:59.103 "trtype": "tcp", 00:28:59.103 "traddr": "10.0.0.2", 00:28:59.103 "adrfam": "ipv4", 00:28:59.103 "trsvcid": "4420", 00:28:59.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.103 "hdgst": false, 00:28:59.103 "ddgst": false 00:28:59.103 }, 00:28:59.103 "method": "bdev_nvme_attach_controller" 00:28:59.103 }' 00:28:59.103 [2024-07-15 11:55:27.134442] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:28:59.103 [2024-07-15 11:55:27.134492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2130761 ] 00:28:59.103 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.103 [2024-07-15 11:55:27.205690] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.362 [2024-07-15 11:55:27.274839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.621 Running I/O for 1 seconds... 
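The invocation above shows how this harness wires bdevperf to the target with no on-disk config: gen_nvmf_target_json emits a bdev_nvme_attach_controller entry, jq validates it, and bdevperf reads the result as --json /dev/fd/62. Below is a minimal standalone sketch of the same pattern; the subsystems/bdev/config envelope is assumed (the log only prints the inner config entry), and gen_target_json is an illustrative stand-in with values copied from this run. The results of the 1-second run follow right after.

    # Build the bdev config for one NVMe-oF TCP controller and hand it to
    # bdevperf over an anonymous fd via process substitution; no temp file.
    gen_target_json() {
      cat <<'JSON'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    JSON
    }
    ./build/examples/bdevperf --json <(gen_target_json) -q 128 -o 4096 -w verify -t 1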
00:29:00.563 00:29:00.563 Latency(us) 00:29:00.563 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.563 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:00.563 Verification LBA range: start 0x0 length 0x4000 00:29:00.563 Nvme1n1 : 1.01 11810.76 46.14 0.00 0.00 10798.71 2411.72 14575.21 00:29:00.563 =================================================================================================================== 00:29:00.563 Total : 11810.76 46.14 0.00 0.00 10798.71 2411.72 14575.21 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2131069 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:00.824 { 00:29:00.824 "params": { 00:29:00.824 "name": "Nvme$subsystem", 00:29:00.824 "trtype": "$TEST_TRANSPORT", 00:29:00.824 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:00.824 "adrfam": "ipv4", 00:29:00.824 "trsvcid": "$NVMF_PORT", 00:29:00.824 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:00.824 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:00.824 "hdgst": ${hdgst:-false}, 00:29:00.824 "ddgst": ${ddgst:-false} 00:29:00.824 }, 00:29:00.824 "method": "bdev_nvme_attach_controller" 00:29:00.824 } 00:29:00.824 EOF 00:29:00.824 )") 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:00.824 11:55:28 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:00.824 "params": { 00:29:00.824 "name": "Nvme1", 00:29:00.824 "trtype": "tcp", 00:29:00.824 "traddr": "10.0.0.2", 00:29:00.824 "adrfam": "ipv4", 00:29:00.824 "trsvcid": "4420", 00:29:00.824 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:00.824 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:00.824 "hdgst": false, 00:29:00.824 "ddgst": false 00:29:00.824 }, 00:29:00.824 "method": "bdev_nvme_attach_controller" 00:29:00.824 }' 00:29:00.824 [2024-07-15 11:55:28.835520] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:00.824 [2024-07-15 11:55:28.835570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131069 ] 00:29:00.824 EAL: No free 2048 kB hugepages reported on node 1 00:29:00.824 [2024-07-15 11:55:28.924364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.084 [2024-07-15 11:55:28.990686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.343 Running I/O for 15 seconds... 
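The second run above differs from the first in two ways: -t 15 lengthens the run, and -f appears to tell bdevperf to keep going when I/O starts failing, which matters because the harness now kills the target mid-run. A sketch of that fault-injection step, reusing the illustrative gen_target_json from the previous sketch; the PID is the nvmfpid recorded when nvmf_tgt was started:

    # Start a 15s verify run, then hard-kill the nvmf target ~3s in. Every
    # I/O still queued on qid:1 is then completed by the initiator as
    # ABORTED - SQ DELETION, producing the nvme_qpair.c notice flood below.
    ./build/examples/bdevperf --json <(gen_target_json) \
      -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!
    sleep 3
    kill -9 2130585   # nvmfpid of the nvmf_tgt started earlier
    sleep 3           # let bdevperf observe and report the aborts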
00:29:03.878 11:55:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2130585 00:29:03.878 11:55:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:03.878 [2024-07-15 11:55:31.812787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.878 [2024-07-15 11:55:31.812829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:108448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:108456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:108464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:108480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:108488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.812980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.812992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:108496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.813003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.813014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:108504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.813024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.878 [2024-07-15 11:55:31.813035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:108512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.878 [2024-07-15 11:55:31.813046] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... a long run of further nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs elided: the same pattern repeats for every remaining in-flight WRITE (lba 108520-108936) and READ (lba 107976-108264) on qid:1, each completed ABORTED - SQ DELETION (00/08) after the target was killed ...]
00:29:03.881 [2024-07-15 11:55:31.814977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:108272 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.814986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.814996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:108288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:108312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:108944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.881 [2024-07-15 11:55:31.815128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:108952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.881 [2024-07-15 11:55:31.815148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:108960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.881 [2024-07-15 11:55:31.815168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:108968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:03.881 [2024-07-15 11:55:31.815188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:108976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.881 [2024-07-15 11:55:31.815209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:108984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.881 [2024-07-15 11:55:31.815231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.881 [2024-07-15 11:55:31.815284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.881 [2024-07-15 11:55:31.815293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:108376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 
11:55:31.815393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.882 [2024-07-15 11:55:31.815513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2477830 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.815535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:03.882 [2024-07-15 11:55:31.815542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:03.882 [2024-07-15 11:55:31.815550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:108440 len:8 PRP1 0x0 PRP2 0x0 00:29:03.882 [2024-07-15 11:55:31.815560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815607] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2477830 was disconnected and freed. reset controller. 
00:29:03.882 [2024-07-15 11:55:31.815655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.882 [2024-07-15 11:55:31.815666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.882 [2024-07-15 11:55:31.815686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.882 [2024-07-15 11:55:31.815706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:03.882 [2024-07-15 11:55:31.815725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:03.882 [2024-07-15 11:55:31.815733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.818786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.882 [2024-07-15 11:55:31.818814] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.882 [2024-07-15 11:55:31.819429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.882 [2024-07-15 11:55:31.819485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.882 [2024-07-15 11:55:31.819519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.820126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.882 [2024-07-15 11:55:31.820667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.882 [2024-07-15 11:55:31.820678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.882 [2024-07-15 11:55:31.820689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.882 [2024-07-15 11:55:31.823319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.882 [2024-07-15 11:55:31.831820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.882 [2024-07-15 11:55:31.832297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.882 [2024-07-15 11:55:31.832316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.882 [2024-07-15 11:55:31.832327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.832498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.882 [2024-07-15 11:55:31.832671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.882 [2024-07-15 11:55:31.832683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.882 [2024-07-15 11:55:31.832693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.882 [2024-07-15 11:55:31.835220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.882 [2024-07-15 11:55:31.844607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.882 [2024-07-15 11:55:31.845048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.882 [2024-07-15 11:55:31.845109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.882 [2024-07-15 11:55:31.845143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.845666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.882 [2024-07-15 11:55:31.845842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.882 [2024-07-15 11:55:31.845854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.882 [2024-07-15 11:55:31.845864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.882 [2024-07-15 11:55:31.848346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.882 [2024-07-15 11:55:31.857293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.882 [2024-07-15 11:55:31.857709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.882 [2024-07-15 11:55:31.857765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.882 [2024-07-15 11:55:31.857799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.858405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.882 [2024-07-15 11:55:31.858619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.882 [2024-07-15 11:55:31.858631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.882 [2024-07-15 11:55:31.858641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.882 [2024-07-15 11:55:31.862392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.882 [2024-07-15 11:55:31.870554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.882 [2024-07-15 11:55:31.870924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.882 [2024-07-15 11:55:31.870944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.882 [2024-07-15 11:55:31.870954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.882 [2024-07-15 11:55:31.871120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.871286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.871298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.883 [2024-07-15 11:55:31.871308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.883 [2024-07-15 11:55:31.873811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.883 [2024-07-15 11:55:31.883400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.883 [2024-07-15 11:55:31.883771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.883 [2024-07-15 11:55:31.883788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.883 [2024-07-15 11:55:31.883798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.883 [2024-07-15 11:55:31.883966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.884124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.884135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.883 [2024-07-15 11:55:31.884144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.883 [2024-07-15 11:55:31.886697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.883 [2024-07-15 11:55:31.896166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.883 [2024-07-15 11:55:31.896647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.883 [2024-07-15 11:55:31.896666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.883 [2024-07-15 11:55:31.896675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.883 [2024-07-15 11:55:31.896840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.897020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.897032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.883 [2024-07-15 11:55:31.897041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.883 [2024-07-15 11:55:31.899560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.883 [2024-07-15 11:55:31.908926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.883 [2024-07-15 11:55:31.909373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.883 [2024-07-15 11:55:31.909391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.883 [2024-07-15 11:55:31.909401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.883 [2024-07-15 11:55:31.909567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.909732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.909744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.883 [2024-07-15 11:55:31.909753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.883 [2024-07-15 11:55:31.912293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.883 [2024-07-15 11:55:31.921721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.883 [2024-07-15 11:55:31.922227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.883 [2024-07-15 11:55:31.922279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.883 [2024-07-15 11:55:31.922312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.883 [2024-07-15 11:55:31.922715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.922881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.922892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.883 [2024-07-15 11:55:31.922904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.883 [2024-07-15 11:55:31.925481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.883 [2024-07-15 11:55:31.934569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.883 [2024-07-15 11:55:31.934960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.883 [2024-07-15 11:55:31.935014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.883 [2024-07-15 11:55:31.935047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.883 [2024-07-15 11:55:31.935641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.883 [2024-07-15 11:55:31.936013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.883 [2024-07-15 11:55:31.936024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.884 [2024-07-15 11:55:31.936033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.884 [2024-07-15 11:55:31.938578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.884 [2024-07-15 11:55:31.947355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.884 [2024-07-15 11:55:31.947780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.884 [2024-07-15 11:55:31.947798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.884 [2024-07-15 11:55:31.947808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.884 [2024-07-15 11:55:31.947990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.884 [2024-07-15 11:55:31.948149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.884 [2024-07-15 11:55:31.948159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.884 [2024-07-15 11:55:31.948168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.884 [2024-07-15 11:55:31.950626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:03.884 [2024-07-15 11:55:31.960153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.884 [2024-07-15 11:55:31.960634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.884 [2024-07-15 11:55:31.960652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.884 [2024-07-15 11:55:31.960661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.884 [2024-07-15 11:55:31.960818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.884 [2024-07-15 11:55:31.960981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.884 [2024-07-15 11:55:31.960992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.884 [2024-07-15 11:55:31.961001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.884 [2024-07-15 11:55:31.963514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:03.884 [2024-07-15 11:55:31.972875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:03.884 [2024-07-15 11:55:31.973312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.884 [2024-07-15 11:55:31.973362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:03.884 [2024-07-15 11:55:31.973396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:03.884 [2024-07-15 11:55:31.973969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:03.884 [2024-07-15 11:55:31.974138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:03.884 [2024-07-15 11:55:31.974151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:03.884 [2024-07-15 11:55:31.974160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:03.884 [2024-07-15 11:55:31.976803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.144 [2024-07-15 11:55:31.985803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:31.986320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:31.986339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:31.986349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:31.986506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:31.986664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:31.986674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:31.986683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:31.989264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.144 [2024-07-15 11:55:31.998652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:31.999116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:31.999169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:31.999204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:31.999804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.000054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:32.000071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:32.000084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:32.003823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.144 [2024-07-15 11:55:32.011577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:32.012086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:32.012104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:32.012115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:32.012272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.012433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:32.012444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:32.012452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:32.014949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.144 [2024-07-15 11:55:32.024298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:32.024667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:32.024686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:32.024696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:32.024867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.025034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:32.025045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:32.025054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:32.027571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.144 [2024-07-15 11:55:32.037069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:32.037549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:32.037567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:32.037576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:32.037733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.037912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:32.037924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:32.037933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:32.040504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.144 [2024-07-15 11:55:32.049919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:32.050421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:32.050473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:32.050506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:32.050961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.051121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.144 [2024-07-15 11:55:32.051132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.144 [2024-07-15 11:55:32.051140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.144 [2024-07-15 11:55:32.053664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.144 [2024-07-15 11:55:32.062753] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.144 [2024-07-15 11:55:32.063214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.144 [2024-07-15 11:55:32.063265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.144 [2024-07-15 11:55:32.063299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.144 [2024-07-15 11:55:32.063908] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.144 [2024-07-15 11:55:32.064076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.064088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.064097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.066773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.075639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.076087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.076106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.076116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.076286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.076457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.076469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.076478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.079149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.145 [2024-07-15 11:55:32.088618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.089158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.089177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.089188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.089359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.089529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.089541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.089550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.092226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.101533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.102042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.102061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.102074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.102245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.102415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.102426] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.102436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.105116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.145 [2024-07-15 11:55:32.114414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.114937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.114957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.114967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.115138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.115308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.115320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.115329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.118001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.127316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.127843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.127863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.127873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.128044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.128214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.128226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.128235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.130909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.145 [2024-07-15 11:55:32.140192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.140692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.140710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.140721] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.140903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.141075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.141090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.141101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.143765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.153219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.153734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.153785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.153818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.154423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.154614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.154625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.154635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.157308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.145 [2024-07-15 11:55:32.166150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.166689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.166741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.166774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.167382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.167735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.167746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.167756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.170375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.178899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.179400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.179454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.179487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.180093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.180614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.180626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.180636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.183119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.145 [2024-07-15 11:55:32.191582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.192050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.192103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.192137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.192521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.145 [2024-07-15 11:55:32.192679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.145 [2024-07-15 11:55:32.192690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.145 [2024-07-15 11:55:32.192699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.145 [2024-07-15 11:55:32.195245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.145 [2024-07-15 11:55:32.204265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.145 [2024-07-15 11:55:32.204788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.145 [2024-07-15 11:55:32.204806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.145 [2024-07-15 11:55:32.204815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.145 [2024-07-15 11:55:32.204999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.146 [2024-07-15 11:55:32.205166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.146 [2024-07-15 11:55:32.205177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.146 [2024-07-15 11:55:32.205186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.146 [2024-07-15 11:55:32.207714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.146 [2024-07-15 11:55:32.216934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.146 [2024-07-15 11:55:32.217443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.146 [2024-07-15 11:55:32.217461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.146 [2024-07-15 11:55:32.217470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.146 [2024-07-15 11:55:32.217627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.146 [2024-07-15 11:55:32.217783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.146 [2024-07-15 11:55:32.217794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.146 [2024-07-15 11:55:32.217803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.146 [2024-07-15 11:55:32.220352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.146 [2024-07-15 11:55:32.229705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.146 [2024-07-15 11:55:32.230208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.146 [2024-07-15 11:55:32.230225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.146 [2024-07-15 11:55:32.230235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.146 [2024-07-15 11:55:32.230396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.146 [2024-07-15 11:55:32.230553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.146 [2024-07-15 11:55:32.230564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.146 [2024-07-15 11:55:32.230572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.146 [2024-07-15 11:55:32.233116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.146 [2024-07-15 11:55:32.242395] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.146 [2024-07-15 11:55:32.242852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.146 [2024-07-15 11:55:32.242906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.146 [2024-07-15 11:55:32.242939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.146 [2024-07-15 11:55:32.243483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.146 [2024-07-15 11:55:32.243654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.146 [2024-07-15 11:55:32.243666] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.146 [2024-07-15 11:55:32.243676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.146 [2024-07-15 11:55:32.246355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.406 [2024-07-15 11:55:32.255405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.406 [2024-07-15 11:55:32.255924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.406 [2024-07-15 11:55:32.255942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.406 [2024-07-15 11:55:32.255951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.406 [2024-07-15 11:55:32.256109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.406 [2024-07-15 11:55:32.256267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.406 [2024-07-15 11:55:32.256277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.406 [2024-07-15 11:55:32.256286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.406 [2024-07-15 11:55:32.258868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.406 [2024-07-15 11:55:32.268105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.406 [2024-07-15 11:55:32.268522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.406 [2024-07-15 11:55:32.268575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.406 [2024-07-15 11:55:32.268608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.406 [2024-07-15 11:55:32.269214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.406 [2024-07-15 11:55:32.269659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.406 [2024-07-15 11:55:32.269671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.406 [2024-07-15 11:55:32.269685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.406 [2024-07-15 11:55:32.272173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.406 [2024-07-15 11:55:32.280835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.406 [2024-07-15 11:55:32.281340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.406 [2024-07-15 11:55:32.281357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.406 [2024-07-15 11:55:32.281367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.406 [2024-07-15 11:55:32.281525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.406 [2024-07-15 11:55:32.281682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.406 [2024-07-15 11:55:32.281692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.406 [2024-07-15 11:55:32.281701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.406 [2024-07-15 11:55:32.284249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.406 [2024-07-15 11:55:32.293565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.406 [2024-07-15 11:55:32.294046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.406 [2024-07-15 11:55:32.294065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.406 [2024-07-15 11:55:32.294075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.406 [2024-07-15 11:55:32.294241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.406 [2024-07-15 11:55:32.294407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.406 [2024-07-15 11:55:32.294419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.406 [2024-07-15 11:55:32.294429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.406 [2024-07-15 11:55:32.297147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.406 [2024-07-15 11:55:32.306383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.406 [2024-07-15 11:55:32.306770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.406 [2024-07-15 11:55:32.306824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.406 [2024-07-15 11:55:32.306875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.307455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.307614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.307625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.307635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.310127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.319215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.319700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.319751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.319784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.320397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.320819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.320830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.320845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.323491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.332081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.332537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.332588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.332621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.333102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.333261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.333272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.333281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.335818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.344861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.345350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.345367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.345377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.345533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.345690] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.345701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.345709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.348257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.357600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.358100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.358152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.358185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.358656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.358814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.358825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.358839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.361384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.370372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.370891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.370943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.370976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.371567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.371995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.372007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.372016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.374534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.383070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.383579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.383631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.383664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.384093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.384332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.384348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.384360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.388096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.396299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.396808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.396826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.396840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.397021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.397187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.397198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.397211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.399718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.409029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.409487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.409539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.409572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.410051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.410219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.410230] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.410239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.412795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.421740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.422196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.422249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.422283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.422795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.422979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.422990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.423000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.407 [2024-07-15 11:55:32.425573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.407 [2024-07-15 11:55:32.434463] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.407 [2024-07-15 11:55:32.434979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.407 [2024-07-15 11:55:32.435031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.407 [2024-07-15 11:55:32.435065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.407 [2024-07-15 11:55:32.435656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.407 [2024-07-15 11:55:32.435914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.407 [2024-07-15 11:55:32.435925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.407 [2024-07-15 11:55:32.435934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.438452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.408 [2024-07-15 11:55:32.447250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.408 [2024-07-15 11:55:32.447756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.408 [2024-07-15 11:55:32.447777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.408 [2024-07-15 11:55:32.447787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.408 [2024-07-15 11:55:32.447968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.408 [2024-07-15 11:55:32.448133] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.408 [2024-07-15 11:55:32.448144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.408 [2024-07-15 11:55:32.448153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.450667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.408 [2024-07-15 11:55:32.460030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.408 [2024-07-15 11:55:32.460527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.408 [2024-07-15 11:55:32.460578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.408 [2024-07-15 11:55:32.460611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.408 [2024-07-15 11:55:32.461004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.408 [2024-07-15 11:55:32.461171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.408 [2024-07-15 11:55:32.461183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.408 [2024-07-15 11:55:32.461192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.463704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.408 [2024-07-15 11:55:32.472776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.408 [2024-07-15 11:55:32.473302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.408 [2024-07-15 11:55:32.473354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.408 [2024-07-15 11:55:32.473387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.408 [2024-07-15 11:55:32.473992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.408 [2024-07-15 11:55:32.474427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.408 [2024-07-15 11:55:32.474438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.408 [2024-07-15 11:55:32.474447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.476907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.408 [2024-07-15 11:55:32.485458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.408 [2024-07-15 11:55:32.485977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.408 [2024-07-15 11:55:32.486029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.408 [2024-07-15 11:55:32.486062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.408 [2024-07-15 11:55:32.486654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.408 [2024-07-15 11:55:32.487236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.408 [2024-07-15 11:55:32.487248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.408 [2024-07-15 11:55:32.487257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.489767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.408 [2024-07-15 11:55:32.498121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.408 [2024-07-15 11:55:32.498627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.408 [2024-07-15 11:55:32.498679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.408 [2024-07-15 11:55:32.498712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.408 [2024-07-15 11:55:32.499273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.408 [2024-07-15 11:55:32.499440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.408 [2024-07-15 11:55:32.499451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.408 [2024-07-15 11:55:32.499461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.408 [2024-07-15 11:55:32.502043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.511142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.511664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.511704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.511738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.512295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.512461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.512472] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.512482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.515057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.523920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.524436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.524488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.524522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.525130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.525434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.525445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.525455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.529050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.537561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.537989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.538042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.538075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.538610] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.538768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.538780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.538788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.541283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.550343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.550863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.550914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.550946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.551398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.551557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.551568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.551578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.554188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.562995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.563514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.563567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.563601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.564207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.564662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.564674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.564683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.567173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.575828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.576408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.576460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.576502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.577111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.577527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.577538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.577564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.580266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.588781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.589283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.589335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.589369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.589972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.590508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.590520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.590529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.593030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.601508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.602038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.602056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.602066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.602223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.602381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.602392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.602401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.604869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.614334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.614819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.614843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.614853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.615011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.615168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.615182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.615191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.617729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.627177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.627689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.627740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.627773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.628380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.628700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.628711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.628719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.631212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.639853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.640285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.640303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.640313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.640470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.640628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.640638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.640647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.643204] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.652633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.653078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.653131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.653165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.653661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.653818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.653828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.653842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.656384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.665327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.665855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.665906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.665938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.666398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.666556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.666567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.666576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.669126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.678063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.678485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.678503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.678512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.678669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.669 [2024-07-15 11:55:32.678827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.669 [2024-07-15 11:55:32.678843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.669 [2024-07-15 11:55:32.678853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.669 [2024-07-15 11:55:32.681398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.669 [2024-07-15 11:55:32.690817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.669 [2024-07-15 11:55:32.691254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.669 [2024-07-15 11:55:32.691272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.669 [2024-07-15 11:55:32.691282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.669 [2024-07-15 11:55:32.691439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.691596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.691606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.691616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.694167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.703523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.704004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.704022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.704032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.704191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.704348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.704360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.704368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.706911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.716193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.716698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.716715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.716724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.716903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.717070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.717080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.717090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.719609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.728923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.729426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.729443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.729452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.729609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.729765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.729775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.729783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.732331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.741581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.742098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.742150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.742183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.742664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.742822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.742839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.742851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.745391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.754344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.754860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.754912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.754945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.755534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.756108] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.756120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.756130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.758642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.670 [2024-07-15 11:55:32.767015] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:04.670 [2024-07-15 11:55:32.767528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.670 [2024-07-15 11:55:32.767578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:04.670 [2024-07-15 11:55:32.767610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:04.670 [2024-07-15 11:55:32.768065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:04.670 [2024-07-15 11:55:32.768238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:04.670 [2024-07-15 11:55:32.768249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:04.670 [2024-07-15 11:55:32.768259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:04.670 [2024-07-15 11:55:32.770953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:04.930 [2024-07-15 11:55:32.779990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.930 [2024-07-15 11:55:32.780511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-15 11:55:32.780562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.930 [2024-07-15 11:55:32.780595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.930 [2024-07-15 11:55:32.780991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.930 [2024-07-15 11:55:32.781158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.930 [2024-07-15 11:55:32.781170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.930 [2024-07-15 11:55:32.781180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.930 [2024-07-15 11:55:32.783745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.930 [2024-07-15 11:55:32.792693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.930 [2024-07-15 11:55:32.793212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-15 11:55:32.793263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.930 [2024-07-15 11:55:32.793297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.930 [2024-07-15 11:55:32.793818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.930 [2024-07-15 11:55:32.794006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.930 [2024-07-15 11:55:32.794018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.930 [2024-07-15 11:55:32.794027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.930 [2024-07-15 11:55:32.796545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.930 [2024-07-15 11:55:32.805408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.930 [2024-07-15 11:55:32.805931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-15 11:55:32.805984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.930 [2024-07-15 11:55:32.806016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.930 [2024-07-15 11:55:32.806605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.930 [2024-07-15 11:55:32.807190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.930 [2024-07-15 11:55:32.807203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.930 [2024-07-15 11:55:32.807212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.930 [2024-07-15 11:55:32.809721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.930 [2024-07-15 11:55:32.818152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.930 [2024-07-15 11:55:32.818589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-15 11:55:32.818607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.930 [2024-07-15 11:55:32.818618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.930 [2024-07-15 11:55:32.818782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.930 [2024-07-15 11:55:32.818955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.930 [2024-07-15 11:55:32.818966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.930 [2024-07-15 11:55:32.818975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.930 [2024-07-15 11:55:32.821570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.930 [2024-07-15 11:55:32.830952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.930 [2024-07-15 11:55:32.831391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.930 [2024-07-15 11:55:32.831441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.930 [2024-07-15 11:55:32.831475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.930 [2024-07-15 11:55:32.831934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.930 [2024-07-15 11:55:32.832129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.930 [2024-07-15 11:55:32.832141] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.930 [2024-07-15 11:55:32.832151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.834974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.843823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.844267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.844318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.844352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.844877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.845036] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.845046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.845056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.847566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.856622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.857163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.857215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.857248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.857856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.858250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.858261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.858270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.860816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.869474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.869988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.870041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.870075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.870421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.870580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.870592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.870601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.873209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.882402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.882891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.882909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.882919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.883076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.883233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.883245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.883253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.885834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.895086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.895583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.895634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.895668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.896130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.896289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.896300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.896309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.898890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.907836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.908382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.908433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.908465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.909067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.909391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.909402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.909411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.911943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.920693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.921208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.921275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.921308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.921912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.922304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.922316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.922324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.924870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.933534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.934050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.934103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.934135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.934724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.935271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.935283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.935293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.937851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.946315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.946744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.946761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.946771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.946954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.947121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.947132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.947141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.949654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.959025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.959534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.959588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.959621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.960204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.960374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.960386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.960396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.963956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.972412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.972900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.972918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.972928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.973085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.973243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.973253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.973262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.975808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.931 [2024-07-15 11:55:32.985172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.931 [2024-07-15 11:55:32.985690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.931 [2024-07-15 11:55:32.985742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.931 [2024-07-15 11:55:32.985774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.931 [2024-07-15 11:55:32.986349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.931 [2024-07-15 11:55:32.986508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.931 [2024-07-15 11:55:32.986519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.931 [2024-07-15 11:55:32.986527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.931 [2024-07-15 11:55:32.989021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.931 [2024-07-15 11:55:32.997819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.932 [2024-07-15 11:55:32.998336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-15 11:55:32.998387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.932 [2024-07-15 11:55:32.998419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.932 [2024-07-15 11:55:32.998752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.932 [2024-07-15 11:55:32.998934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.932 [2024-07-15 11:55:32.998946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.932 [2024-07-15 11:55:32.998956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.932 [2024-07-15 11:55:33.001480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:04.932 [2024-07-15 11:55:33.010483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.932 [2024-07-15 11:55:33.010986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-15 11:55:33.011038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.932 [2024-07-15 11:55:33.011071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.932 [2024-07-15 11:55:33.011545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.932 [2024-07-15 11:55:33.011702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.932 [2024-07-15 11:55:33.011713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.932 [2024-07-15 11:55:33.011722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.932 [2024-07-15 11:55:33.014276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:04.932 [2024-07-15 11:55:33.023235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:04.932 [2024-07-15 11:55:33.023737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.932 [2024-07-15 11:55:33.023755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:04.932 [2024-07-15 11:55:33.023764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:04.932 [2024-07-15 11:55:33.023946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:04.932 [2024-07-15 11:55:33.024113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:04.932 [2024-07-15 11:55:33.024123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:04.932 [2024-07-15 11:55:33.024132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:04.932 [2024-07-15 11:55:33.026638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.192 [2024-07-15 11:55:33.036155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.036675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.036726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.036759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.037367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.037762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.037774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.037783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.040455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.192 [2024-07-15 11:55:33.048923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.049428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.049445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.049457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.049613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.049770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.049780] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.049790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.052333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.192 [2024-07-15 11:55:33.061567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.062082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.062134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.062166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.062619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.062778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.062789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.062797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.065344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.192 [2024-07-15 11:55:33.074294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.074815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.074880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.074913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.075503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.075760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.075771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.075780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.078323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.192 [2024-07-15 11:55:33.087081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.087575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.087626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.087659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.088264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.088681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.088696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.088705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.091392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.192 [2024-07-15 11:55:33.100010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.192 [2024-07-15 11:55:33.100526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.192 [2024-07-15 11:55:33.100577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.192 [2024-07-15 11:55:33.100611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.192 [2024-07-15 11:55:33.100971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.192 [2024-07-15 11:55:33.101138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.192 [2024-07-15 11:55:33.101149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.192 [2024-07-15 11:55:33.101159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.192 [2024-07-15 11:55:33.103680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.192 [2024-07-15 11:55:33.112781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.113296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.113347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.113379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.113899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.114069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.114081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.114092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.116607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.125533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.126044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.126096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.126129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.126546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.126705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.126716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.126727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.129257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.193 [2024-07-15 11:55:33.138355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.138878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.138931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.138964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.139553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.139782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.139793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.139802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.142390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.151383] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.151901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.151920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.151930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.152099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.152270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.152280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.152290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.154959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.193 [2024-07-15 11:55:33.164413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.164930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.164948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.164958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.165128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.165298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.165308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.165318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.167992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.177305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.177752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.177770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.177780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.177961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.178132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.178144] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.178153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.180822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.193 [2024-07-15 11:55:33.190272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.190795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.190814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.190824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.190999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.191170] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.191182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.191191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.193860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.203172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.203701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.203719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.203729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.203904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.204074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.204084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.204094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.206763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.193 [2024-07-15 11:55:33.216083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.216608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.216627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.216637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.216806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.216982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.216994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.217006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.219691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.229047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.229567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.229586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.229596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.229766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.229942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.229954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.229963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.232633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.193 [2024-07-15 11:55:33.241932] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.193 [2024-07-15 11:55:33.242448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.193 [2024-07-15 11:55:33.242467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.193 [2024-07-15 11:55:33.242477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.193 [2024-07-15 11:55:33.242646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.193 [2024-07-15 11:55:33.242816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.193 [2024-07-15 11:55:33.242827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.193 [2024-07-15 11:55:33.242842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.193 [2024-07-15 11:55:33.245511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.193 [2024-07-15 11:55:33.254803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.194 [2024-07-15 11:55:33.255327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.194 [2024-07-15 11:55:33.255346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.194 [2024-07-15 11:55:33.255356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.194 [2024-07-15 11:55:33.255525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.194 [2024-07-15 11:55:33.255695] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.194 [2024-07-15 11:55:33.255705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.194 [2024-07-15 11:55:33.255714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.194 [2024-07-15 11:55:33.258386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.194 [2024-07-15 11:55:33.267699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.194 [2024-07-15 11:55:33.268238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.194 [2024-07-15 11:55:33.268289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.194 [2024-07-15 11:55:33.268323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.194 [2024-07-15 11:55:33.268927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.194 [2024-07-15 11:55:33.269328] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.194 [2024-07-15 11:55:33.269341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.194 [2024-07-15 11:55:33.269350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.194 [2024-07-15 11:55:33.272025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.194 [2024-07-15 11:55:33.280722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.194 [2024-07-15 11:55:33.281119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.194 [2024-07-15 11:55:33.281138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.194 [2024-07-15 11:55:33.281149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.194 [2024-07-15 11:55:33.281319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.194 [2024-07-15 11:55:33.281490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.194 [2024-07-15 11:55:33.281502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.194 [2024-07-15 11:55:33.281511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.194 [2024-07-15 11:55:33.284138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.194 [2024-07-15 11:55:33.293656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.194 [2024-07-15 11:55:33.294120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.194 [2024-07-15 11:55:33.294139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.194 [2024-07-15 11:55:33.294149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.194 [2024-07-15 11:55:33.294319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.194 [2024-07-15 11:55:33.294490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.194 [2024-07-15 11:55:33.294501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.194 [2024-07-15 11:55:33.294511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.452 [2024-07-15 11:55:33.297420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.452 [2024-07-15 11:55:33.306550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.452 [2024-07-15 11:55:33.307019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.452 [2024-07-15 11:55:33.307075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.452 [2024-07-15 11:55:33.307109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.452 [2024-07-15 11:55:33.307700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.452 [2024-07-15 11:55:33.308278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.452 [2024-07-15 11:55:33.308290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.452 [2024-07-15 11:55:33.308298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.452 [2024-07-15 11:55:33.310885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.452 [2024-07-15 11:55:33.319372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.452 [2024-07-15 11:55:33.319889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.452 [2024-07-15 11:55:33.319950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.452 [2024-07-15 11:55:33.319984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.452 [2024-07-15 11:55:33.320499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.453 [2024-07-15 11:55:33.320658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.453 [2024-07-15 11:55:33.320669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.453 [2024-07-15 11:55:33.320678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.453 [2024-07-15 11:55:33.323177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:05.453 [2024-07-15 11:55:33.332230] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:05.453 [2024-07-15 11:55:33.332651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.453 [2024-07-15 11:55:33.332671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:05.453 [2024-07-15 11:55:33.332681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:05.453 [2024-07-15 11:55:33.332843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:05.453 [2024-07-15 11:55:33.333000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:05.453 [2024-07-15 11:55:33.333011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:05.453 [2024-07-15 11:55:33.333020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:05.453 [2024-07-15 11:55:33.335562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:05.453 [2024-07-15 11:55:33.345064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.345568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.345586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.345595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.345751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.345935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.345947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.345956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.348651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.357998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.358450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.358503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.358536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.359057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.359217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.359228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.359237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.361700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.370803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.371208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.371261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.371294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.371683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.371848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.371860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.371869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.374368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.383616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.384184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.384237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.384271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.384628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.384786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.384797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.384806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.387303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.396366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.396831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.396895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.396936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.397456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.397623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.397634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.397644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.400131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.409177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.409742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.409793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.409827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.410275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.410433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.410445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.410454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.412941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.421902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.422288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.422341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.422374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.422758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.422941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.422953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.422962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.425491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.434581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.435061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.435080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.435091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.435256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.435425] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.435437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.435445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.437950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.447331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.447862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.447914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.447947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.448373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.448531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.448542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.448551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.451039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.460137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.460635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.460653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.453 [2024-07-15 11:55:33.460663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.453 [2024-07-15 11:55:33.460819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.453 [2024-07-15 11:55:33.461004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.453 [2024-07-15 11:55:33.461017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.453 [2024-07-15 11:55:33.461026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.453 [2024-07-15 11:55:33.463550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.453 [2024-07-15 11:55:33.472946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.453 [2024-07-15 11:55:33.473458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.453 [2024-07-15 11:55:33.473477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.473487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.473654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.473820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.473838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.473848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.476450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.485709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.486239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.486293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.486326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.486688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.486853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.486865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.486873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.489372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.498453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.498941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.498959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.498969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.499136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.499301] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.499312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.499321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.501830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.511245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.511754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.511805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.511852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.512443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.512940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.512951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.512960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.515490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.524162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.524677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.524729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.524769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.525379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.525893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.525905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.525913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.528476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.536974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.537436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.537454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.537463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.537621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.537778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.537789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.537798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.540350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.454 [2024-07-15 11:55:33.549825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.454 [2024-07-15 11:55:33.550205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.454 [2024-07-15 11:55:33.550257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.454 [2024-07-15 11:55:33.550291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.454 [2024-07-15 11:55:33.550766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.454 [2024-07-15 11:55:33.550951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.454 [2024-07-15 11:55:33.550969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.454 [2024-07-15 11:55:33.550980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.454 [2024-07-15 11:55:33.553622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-07-15 11:55:33.562776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-07-15 11:55:33.563228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-07-15 11:55:33.563247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-07-15 11:55:33.563258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.713 [2024-07-15 11:55:33.563424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.713 [2024-07-15 11:55:33.563590] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-07-15 11:55:33.563607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-07-15 11:55:33.563618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-07-15 11:55:33.566149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-07-15 11:55:33.575680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-07-15 11:55:33.576128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-07-15 11:55:33.576147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-07-15 11:55:33.576157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.713 [2024-07-15 11:55:33.576325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.713 [2024-07-15 11:55:33.576497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-07-15 11:55:33.576509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-07-15 11:55:33.576519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-07-15 11:55:33.579117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-07-15 11:55:33.588504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-07-15 11:55:33.589059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-07-15 11:55:33.589112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-07-15 11:55:33.589146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.713 [2024-07-15 11:55:33.589652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.713 [2024-07-15 11:55:33.589819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-07-15 11:55:33.589830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-07-15 11:55:33.589845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-07-15 11:55:33.592442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-07-15 11:55:33.601552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-07-15 11:55:33.602025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-07-15 11:55:33.602044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.713 [2024-07-15 11:55:33.602054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.713 [2024-07-15 11:55:33.602220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.713 [2024-07-15 11:55:33.602387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.713 [2024-07-15 11:55:33.602398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.713 [2024-07-15 11:55:33.602408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.713 [2024-07-15 11:55:33.605098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.713 [2024-07-15 11:55:33.614475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.713 [2024-07-15 11:55:33.614946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.713 [2024-07-15 11:55:33.615000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.615033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.615622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.615819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.615830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.615845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.618348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.627317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.627701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.627752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.627784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.628248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.628416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.628428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.628437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.630942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.640073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.640425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.640475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.640508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.640968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.641127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.641138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.641148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.643690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.652880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.653945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.653971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.653983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.654163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.654331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.654342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.654351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.656871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.665649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.666102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.666157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.666191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.666784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.667297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.667310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.667319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.669948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.678544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.678927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.678947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.678958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.679116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.679275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.679286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.679296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.681940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.691574] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.692014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.692034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.692044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.692215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.692385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.692397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.692410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.695089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.704561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.705082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.705101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.705111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.705281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.705452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.705464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.705474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.708151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.717484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.718006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.718026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.714 [2024-07-15 11:55:33.718036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.714 [2024-07-15 11:55:33.718208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.714 [2024-07-15 11:55:33.718379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.714 [2024-07-15 11:55:33.718390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.714 [2024-07-15 11:55:33.718400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.714 [2024-07-15 11:55:33.721074] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.714 [2024-07-15 11:55:33.730375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.714 [2024-07-15 11:55:33.730874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.714 [2024-07-15 11:55:33.730893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.730903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.731074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.731244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.731256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.731265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.733938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.743406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.743951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.744012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.744045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.744519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.744687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.744699] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.744708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.747397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.756372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.756898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.756950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.756984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.757397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.757563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.757575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.757584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.760248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.769153] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.769659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.769677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.769686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.769850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.770008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.770019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.770027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.772540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.781876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.782396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.782448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.782481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.783089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.783567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.783579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.783588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.786079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.794525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.795045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.795097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.795132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.795584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.795743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.795754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.795764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.798314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.715 [2024-07-15 11:55:33.807302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.715 [2024-07-15 11:55:33.807808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.715 [2024-07-15 11:55:33.807824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.715 [2024-07-15 11:55:33.807840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.715 [2024-07-15 11:55:33.808020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.715 [2024-07-15 11:55:33.808186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.715 [2024-07-15 11:55:33.808197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.715 [2024-07-15 11:55:33.808206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.715 [2024-07-15 11:55:33.810718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.975 [2024-07-15 11:55:33.820237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.975 [2024-07-15 11:55:33.820751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.975 [2024-07-15 11:55:33.820802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.975 [2024-07-15 11:55:33.820848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.975 [2024-07-15 11:55:33.821215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.821382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.821393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.821402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.824085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.833058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.833580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.833631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.833664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.834205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.834372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.834383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.834392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.836894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.845843] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.846356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.846408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.846441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.846860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.847020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.847031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.847040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.849503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.858510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.859051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.859105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.859138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.859644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.859803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.859813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.859822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.862680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.871381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.871892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.871945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.871987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.872391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.872549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.872559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.872568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.875058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.884132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.884655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.884706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.884739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.885346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.885818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.885829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.885842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.888312] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.896854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.897297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.897349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.897381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.897874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.898042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.898053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.898062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.976 [2024-07-15 11:55:33.900579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.976 [2024-07-15 11:55:33.909748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.976 [2024-07-15 11:55:33.910109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.976 [2024-07-15 11:55:33.910149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.976 [2024-07-15 11:55:33.910183] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.976 [2024-07-15 11:55:33.910740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.976 [2024-07-15 11:55:33.910985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.976 [2024-07-15 11:55:33.911005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.976 [2024-07-15 11:55:33.911018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.914758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.923030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.923536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.923554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.923564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.923721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.923882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.923894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.923902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.926403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.935837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.936359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.936409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.936443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.936876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.937034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.937046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.937054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.939598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.948638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.949162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.949216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.949249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.949741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.949922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.949934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.949944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.952538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.961494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.962026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.962079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.962113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.962596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.962754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.962765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.962774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.965328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.974265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.974712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.974764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.974797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.975407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.975862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.975874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.975882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.978434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.987038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:33.987556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:33.987607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:33.987641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:33.987982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:33.988140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:33.988151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:33.988160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.977 [2024-07-15 11:55:33.990729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.977 [2024-07-15 11:55:33.999820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.977 [2024-07-15 11:55:34.000276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.977 [2024-07-15 11:55:34.000330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.977 [2024-07-15 11:55:34.000363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.977 [2024-07-15 11:55:34.000838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.977 [2024-07-15 11:55:34.001024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.977 [2024-07-15 11:55:34.001035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.977 [2024-07-15 11:55:34.001044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.003548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.012492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.012994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.013047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.013081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.013674] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.014109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.014121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.014130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.016693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.025200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.025689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.025707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.025716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.025880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.026062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.026073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.026082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.028680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.037905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.038337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.038354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.038364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.038520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.038678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.038689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.038701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.041249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.050620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.051058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.051076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.051086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.051251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.051417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.051428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.051436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.054037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.063364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.063812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.063876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.063911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.064275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.064433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.064444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.064453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:05.978 [2024-07-15 11:55:34.067002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:05.978 [2024-07-15 11:55:34.076224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:05.978 [2024-07-15 11:55:34.076698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.978 [2024-07-15 11:55:34.076761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:05.978 [2024-07-15 11:55:34.076795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:05.978 [2024-07-15 11:55:34.077206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:05.978 [2024-07-15 11:55:34.077377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:05.978 [2024-07-15 11:55:34.077389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:05.978 [2024-07-15 11:55:34.077398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.080075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.089061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.089572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.089589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.089599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.089755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.089938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.089950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.089958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.092527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.101818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.102329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.102381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.102414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.102952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.103154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.103169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.103182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.106923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.115041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.115467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.115485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.115495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.115651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.115809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.115820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.115829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.118481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.127894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.128357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.128408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.128441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.129029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.129188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.129199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.129208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.131671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.140548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.141063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.141116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.141149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.141543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.141701] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.141712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.141721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.144269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.153305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.153826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.153889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.153922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.154424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.154583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.154594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.154602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.157146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.166003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.166513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.166530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.166540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.166696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.166859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.166870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.166898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.169427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.178665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.247 [2024-07-15 11:55:34.179192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.247 [2024-07-15 11:55:34.179246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.247 [2024-07-15 11:55:34.179279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.247 [2024-07-15 11:55:34.179886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.247 [2024-07-15 11:55:34.180363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.247 [2024-07-15 11:55:34.180375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.247 [2024-07-15 11:55:34.180384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.247 [2024-07-15 11:55:34.182881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.247 [2024-07-15 11:55:34.191392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.191889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.191941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.191974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.192212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.192370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.192381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.192390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.194943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.204070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.204591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.204643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.204675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.205117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.205283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.205295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.205304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.207808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.216838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.217352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.217411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.217444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.218054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.218427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.218439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.218448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.220944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.229495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.229942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.229994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.230029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.230428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.230586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.230598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.230607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.233158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.242361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.242879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.242965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.243430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.243588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.243599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.243607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.246205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.255125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.255622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.255673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.255707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.256263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.256436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.256448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.256457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.258951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.267907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.268395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.268413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.268423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.268580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.268737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.268748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.268757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.271299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.280600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.281114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.281167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.281200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.281571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.281730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.281741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.281750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.284297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.293379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.293895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.293946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.293979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.294417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.294576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.294587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.294596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.297335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.306075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.306594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.306649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.306683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.307288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.307456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.248 [2024-07-15 11:55:34.307467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.248 [2024-07-15 11:55:34.307477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.248 [2024-07-15 11:55:34.310062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.248 [2024-07-15 11:55:34.318865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.248 [2024-07-15 11:55:34.319374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.248 [2024-07-15 11:55:34.319392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.248 [2024-07-15 11:55:34.319401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.248 [2024-07-15 11:55:34.319558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.248 [2024-07-15 11:55:34.319715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.249 [2024-07-15 11:55:34.319726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.249 [2024-07-15 11:55:34.319734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.249 [2024-07-15 11:55:34.322284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.249 [2024-07-15 11:55:34.331554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.249 [2024-07-15 11:55:34.332056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.249 [2024-07-15 11:55:34.332096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.249 [2024-07-15 11:55:34.332131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.249 [2024-07-15 11:55:34.332645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.249 [2024-07-15 11:55:34.332803] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.249 [2024-07-15 11:55:34.332815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.249 [2024-07-15 11:55:34.332824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.249 [2024-07-15 11:55:34.335374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.249 [2024-07-15 11:55:34.344426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.249 [2024-07-15 11:55:34.344875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.249 [2024-07-15 11:55:34.344928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.249 [2024-07-15 11:55:34.344970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.249 [2024-07-15 11:55:34.345561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.249 [2024-07-15 11:55:34.345754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.249 [2024-07-15 11:55:34.345766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.249 [2024-07-15 11:55:34.345775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-07-15 11:55:34.349331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-07-15 11:55:34.357928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-07-15 11:55:34.358423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-07-15 11:55:34.358467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-07-15 11:55:34.358500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.508 [2024-07-15 11:55:34.359106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.508 [2024-07-15 11:55:34.359679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-07-15 11:55:34.359690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-07-15 11:55:34.359700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.508 [2024-07-15 11:55:34.362247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.508 [2024-07-15 11:55:34.370863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.508 [2024-07-15 11:55:34.371419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.508 [2024-07-15 11:55:34.371437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.508 [2024-07-15 11:55:34.371447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.508 [2024-07-15 11:55:34.371618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.508 [2024-07-15 11:55:34.371789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.508 [2024-07-15 11:55:34.371800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.508 [2024-07-15 11:55:34.371809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.374471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.383620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.384143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.384161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.384171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.384328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.384486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.384500] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.384508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.386995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.396342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.396862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.396915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.396949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.397546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.397705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.397716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.397725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.400278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.409057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.409556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.409608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.409641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.410249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.410857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.410892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.410922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.413391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.421764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.422289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.422342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.422375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.422826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.423012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.423024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.423033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.425553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.434443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.434875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.434893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.434903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.435059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.435216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.435227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.435235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.437783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.447168] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.447650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.447667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.447677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.447840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.448020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.448032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.448041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.450564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.459938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.509 [2024-07-15 11:55:34.460450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.509 [2024-07-15 11:55:34.460501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.509 [2024-07-15 11:55:34.460534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.509 [2024-07-15 11:55:34.461042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.509 [2024-07-15 11:55:34.461200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.509 [2024-07-15 11:55:34.461211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.509 [2024-07-15 11:55:34.461219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.509 [2024-07-15 11:55:34.463675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.509 [2024-07-15 11:55:34.472643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.509 [2024-07-15 11:55:34.473168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.509 [2024-07-15 11:55:34.473222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.509 [2024-07-15 11:55:34.473256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.509 [2024-07-15 11:55:34.473870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.509 [2024-07-15 11:55:34.474258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.509 [2024-07-15 11:55:34.474269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.509 [2024-07-15 11:55:34.474279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.509 [2024-07-15 11:55:34.476778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.509 [2024-07-15 11:55:34.485432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.509 [2024-07-15 11:55:34.485939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.509 [2024-07-15 11:55:34.485957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.509 [2024-07-15 11:55:34.485967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.509 [2024-07-15 11:55:34.486124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.509 [2024-07-15 11:55:34.486281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.509 [2024-07-15 11:55:34.486292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.509 [2024-07-15 11:55:34.486300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.509 [2024-07-15 11:55:34.488852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.509 [2024-07-15 11:55:34.498085] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.509 [2024-07-15 11:55:34.498605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.509 [2024-07-15 11:55:34.498656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.509 [2024-07-15 11:55:34.498689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.509 [2024-07-15 11:55:34.499186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.509 [2024-07-15 11:55:34.499354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.509 [2024-07-15 11:55:34.499365] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.509 [2024-07-15 11:55:34.499374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.509 [2024-07-15 11:55:34.501876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.509 [2024-07-15 11:55:34.510761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.509 [2024-07-15 11:55:34.511280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.509 [2024-07-15 11:55:34.511332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.509 [2024-07-15 11:55:34.511366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.509 [2024-07-15 11:55:34.511971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.509 [2024-07-15 11:55:34.512265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.509 [2024-07-15 11:55:34.512276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.509 [2024-07-15 11:55:34.512288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.509 [2024-07-15 11:55:34.514796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.509 [2024-07-15 11:55:34.523454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.509 [2024-07-15 11:55:34.523971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.509 [2024-07-15 11:55:34.524023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.509 [2024-07-15 11:55:34.524056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.524428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.524586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.524597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.524606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.527155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.510 [2024-07-15 11:55:34.536138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.536653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.536706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.536739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.537203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.537370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.537381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.537390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.539895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.510 [2024-07-15 11:55:34.548907] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.549426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.549476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.549509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.549977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.550144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.550156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.550165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.552772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.510 [2024-07-15 11:55:34.561714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.562237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.562289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.562322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.562931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.563098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.563109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.563118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.565634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.510 [2024-07-15 11:55:34.574456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.574967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.574985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.574995] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.575153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.575311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.575321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.575330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.577823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.510 [2024-07-15 11:55:34.587125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.587641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.587694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.587727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.588220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.588388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.588400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.588409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.590914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.510 [2024-07-15 11:55:34.599791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.600241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.600296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.600330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.510 [2024-07-15 11:55:34.600946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.510 [2024-07-15 11:55:34.601363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.510 [2024-07-15 11:55:34.601375] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.510 [2024-07-15 11:55:34.601384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.510 [2024-07-15 11:55:34.603931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.510 [2024-07-15 11:55:34.612678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.510 [2024-07-15 11:55:34.613184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.510 [2024-07-15 11:55:34.613203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.510 [2024-07-15 11:55:34.613214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.613384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.613555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.613566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.613575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.616166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.769 [2024-07-15 11:55:34.625373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.625751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.625768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.625778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.625960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.626126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.626137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.626146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.628788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.769 [2024-07-15 11:55:34.638193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.638728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.638779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.638812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.639254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.639412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.639423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.639435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.641910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.769 [2024-07-15 11:55:34.650919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.651445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.651496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.651529] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.652132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.652686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.652697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.652705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.655194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.769 [2024-07-15 11:55:34.663679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.664196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.664248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.664281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.664623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.664781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.664792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.664801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.667350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.769 [2024-07-15 11:55:34.676418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.676930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.676983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.677016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.677458] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.677617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.677627] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.677636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.680121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.769 [2024-07-15 11:55:34.689119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.689638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.689699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.689733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.690341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.690838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.690854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.690867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.694607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.769 [2024-07-15 11:55:34.702292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.702810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.769 [2024-07-15 11:55:34.702827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.769 [2024-07-15 11:55:34.702842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.769 [2024-07-15 11:55:34.703022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.769 [2024-07-15 11:55:34.703188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.769 [2024-07-15 11:55:34.703199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.769 [2024-07-15 11:55:34.703208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.769 [2024-07-15 11:55:34.705713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.769 [2024-07-15 11:55:34.715028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.769 [2024-07-15 11:55:34.715532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.715550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.715559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.715716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.715878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.715906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.715916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.718440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.770 [2024-07-15 11:55:34.727804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.728325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.728378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.728411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.728768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.728954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.728966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.728977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.731492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.770 [2024-07-15 11:55:34.740699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.741158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.741177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.741187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.741357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.741527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.741538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.741547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.744224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.770 [2024-07-15 11:55:34.753708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.754234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.754253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.754263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.754432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.754603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.754614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.754624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.757292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.770 [2024-07-15 11:55:34.766582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.767103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.767121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.767131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.767302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.767472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.767484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.767493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.770171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.770 [2024-07-15 11:55:34.779471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.780000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.780018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.780029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.780199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.780370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.780382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.780392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.783067] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:06.770 [2024-07-15 11:55:34.792372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.770 [2024-07-15 11:55:34.792889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.770 [2024-07-15 11:55:34.792908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.770 [2024-07-15 11:55:34.792919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.770 [2024-07-15 11:55:34.793088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.770 [2024-07-15 11:55:34.793259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.770 [2024-07-15 11:55:34.793271] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.770 [2024-07-15 11:55:34.793280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:06.770 [2024-07-15 11:55:34.795955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:06.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2130585 Killed "${NVMF_APP[@]}" "$@"
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.770 [2024-07-15 11:55:34.805247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.770 [2024-07-15 11:55:34.805737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.770 [2024-07-15 11:55:34.805756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.770 [2024-07-15 11:55:34.805767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.770 [2024-07-15 11:55:34.805942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.770 [2024-07-15 11:55:34.806114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.770 [2024-07-15 11:55:34.806125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.770 [2024-07-15 11:55:34.806134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.770 [2024-07-15 11:55:34.808805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2131960
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2131960
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2131960 ']'
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:06.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:06.770 11:55:34 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:06.770 [2024-07-15 11:55:34.818309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.770 [2024-07-15 11:55:34.818843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.770 [2024-07-15 11:55:34.818862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.770 [2024-07-15 11:55:34.818873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.770 [2024-07-15 11:55:34.819044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.770 [2024-07-15 11:55:34.819215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.770 [2024-07-15 11:55:34.819227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.770 [2024-07-15 11:55:34.819236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.770 [2024-07-15 11:55:34.821914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.770 [2024-07-15 11:55:34.831243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.770 [2024-07-15 11:55:34.831753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.770 [2024-07-15 11:55:34.831772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.771 [2024-07-15 11:55:34.831783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.771 [2024-07-15 11:55:34.831960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.771 [2024-07-15 11:55:34.832131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.771 [2024-07-15 11:55:34.832143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.771 [2024-07-15 11:55:34.832152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.771 [2024-07-15 11:55:34.834827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.771 [2024-07-15 11:55:34.844141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.771 [2024-07-15 11:55:34.844635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.771 [2024-07-15 11:55:34.844654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.771 [2024-07-15 11:55:34.844664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.771 [2024-07-15 11:55:34.844850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.771 [2024-07-15 11:55:34.845021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.771 [2024-07-15 11:55:34.845033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.771 [2024-07-15 11:55:34.845042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.771 [2024-07-15 11:55:34.847688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:06.771 [2024-07-15 11:55:34.857058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:06.771 [2024-07-15 11:55:34.857522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:06.771 [2024-07-15 11:55:34.857541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:06.771 [2024-07-15 11:55:34.857551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:06.771 [2024-07-15 11:55:34.857716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:06.771 [2024-07-15 11:55:34.857888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:06.771 [2024-07-15 11:55:34.857899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:06.771 [2024-07-15 11:55:34.857909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:06.771 [2024-07-15 11:55:34.858902] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:29:06.771 [2024-07-15 11:55:34.858949] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:06.771 [2024-07-15 11:55:34.860542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
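At this point the script has killed the previous target (pid 2130585), relaunched nvmf_tgt inside the cvl_0_0_ns_spdk namespace as pid 2131960, and entered waitforlisten while the reset retries keep failing in the background; the "Starting SPDK ... DPDK 24.03.0 initialization" and EAL parameter lines above are the new target coming up. A rough sketch of the kind of poll loop waitforlisten implies (the real helper lives in autotest_common.sh; this shape and the wait_for_rpc_sock name are assumptions, not the actual code):

  wait_for_rpc_sock() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
      local i
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
          [[ -S $rpc_addr ]] && return 0          # RPC socket exists; app is listening
          sleep 0.1
      done
      return 1
  }
  wait_for_rpc_sock 2131960 /var/tmp/spdk.sock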
00:29:06.771 [2024-07-15 11:55:34.869971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:06.771 [2024-07-15 11:55:34.870367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.771 [2024-07-15 11:55:34.870387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:06.771 [2024-07-15 11:55:34.870397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:06.771 [2024-07-15 11:55:34.870568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:06.771 [2024-07-15 11:55:34.870739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:06.771 [2024-07-15 11:55:34.870751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:06.771 [2024-07-15 11:55:34.870760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.032 [2024-07-15 11:55:34.873435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.032 [2024-07-15 11:55:34.882900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.032 [2024-07-15 11:55:34.883360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.032 [2024-07-15 11:55:34.883379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.032 [2024-07-15 11:55:34.883389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.032 [2024-07-15 11:55:34.883565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.032 [2024-07-15 11:55:34.883735] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.032 [2024-07-15 11:55:34.883747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.032 [2024-07-15 11:55:34.883757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.032 [2024-07-15 11:55:34.886430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.032 [2024-07-15 11:55:34.895909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.032 [2024-07-15 11:55:34.896358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.032 [2024-07-15 11:55:34.896378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.032 [2024-07-15 11:55:34.896388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.032 [2024-07-15 11:55:34.896554] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.032 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.032 [2024-07-15 11:55:34.896720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.032 [2024-07-15 11:55:34.896731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.032 [2024-07-15 11:55:34.896740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.032 [2024-07-15 11:55:34.899371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.032 [2024-07-15 11:55:34.908853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.032 [2024-07-15 11:55:34.909331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.032 [2024-07-15 11:55:34.909351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.032 [2024-07-15 11:55:34.909361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.032 [2024-07-15 11:55:34.909532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.032 [2024-07-15 11:55:34.909703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.032 [2024-07-15 11:55:34.909715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.909725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.912402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.033 [2024-07-15 11:55:34.921776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.922319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.922339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.922349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.922521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.922692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.922704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.922717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.925340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.033 [2024-07-15 11:55:34.934719] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.935189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.935208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.935218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.935389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.935560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.935571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.935580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.935609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:07.033 [2024-07-15 11:55:34.938230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
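The "Total cores available: 3" notice matches the -m 0xE core mask handed to nvmf_tgt: 0xE is binary 1110, so cores 1, 2 and 3 are selected and core 0 is left free, which is why exactly three reactors come up a few lines further on. A quick illustrative decode in bash:

  mask=0xE
  for ((core = 0; core < 64; core++)); do
      # print each core whose bit is set in the mask
      (( (mask >> core) & 1 )) && echo "core $core selected"
  done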
00:29:07.033 [2024-07-15 11:55:34.947573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.948048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.948069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.948079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.948247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.948413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.948425] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.948436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.951063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.033 [2024-07-15 11:55:34.960459] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.960961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.960979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.960990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.961156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.961323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.961335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.961344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.964000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.033 [2024-07-15 11:55:34.973338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.973875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.973894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.973905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.974071] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.974238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.974249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.974260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.976925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.033 [2024-07-15 11:55:34.986239] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.033 [2024-07-15 11:55:34.986770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.033 [2024-07-15 11:55:34.986791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.033 [2024-07-15 11:55:34.986801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.033 [2024-07-15 11:55:34.986980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.033 [2024-07-15 11:55:34.987151] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.033 [2024-07-15 11:55:34.987163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.033 [2024-07-15 11:55:34.987173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.033 [2024-07-15 11:55:34.989844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.033 [2024-07-15 11:55:34.999148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.033 [2024-07-15 11:55:34.999702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.033 [2024-07-15 11:55:34.999721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.033 [2024-07-15 11:55:34.999732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.033 [2024-07-15 11:55:34.999909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.033 [2024-07-15 11:55:35.000080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.033 [2024-07-15 11:55:35.000092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.033 [2024-07-15 11:55:35.000101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.033 [2024-07-15 11:55:35.002770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.033 [2024-07-15 11:55:35.010210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:07.033 [2024-07-15 11:55:35.010238] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:07.033 [2024-07-15 11:55:35.010249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:07.033 [2024-07-15 11:55:35.010259] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:07.033 [2024-07-15 11:55:35.010271] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:07.033 [2024-07-15 11:55:35.010312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:07.033 [2024-07-15 11:55:35.010396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:07.034 [2024-07-15 11:55:35.010398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:07.034 [2024-07-15 11:55:35.012075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.012558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.012577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.012588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.012761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.012938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.012949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.012959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.015634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.024985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.025297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.025317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.025328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.025500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.025671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.025683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.025693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.028371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.038018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.038494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.038514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.038525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.038695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.038873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.038885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.038895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.041562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
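The app_setup_trace notices above record that the nvmf target started with tracepoint group mask 0xFFFF, so its trace history can be snapshotted while the reset loop runs. A sketch of capturing it, using only the command and shm file the notices themselves name (the destination path is an assumption):

  $ spdk_trace -s nvmf -i 0          # quoted verbatim from the notice above
  $ cp /dev/shm/nvmf_trace.0 /tmp/   # keep a copy for offline analysis/debug, as the log suggests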
00:29:07.034 [2024-07-15 11:55:35.051035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.051611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.051630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.051641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.051812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.051989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.052001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.052010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.054686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.064003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.064404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.064425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.064436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.064608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.064779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.064791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.064801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.067480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.076942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.077406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.077425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.077440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.077620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.077792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.077804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.077815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.080489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.089950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.090472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.090490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.090501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.090678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.090855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.090867] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.090877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.093550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.034 [2024-07-15 11:55:35.102919] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.034 [2024-07-15 11:55:35.103423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.034 [2024-07-15 11:55:35.103444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.034 [2024-07-15 11:55:35.103454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.034 [2024-07-15 11:55:35.103625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.034 [2024-07-15 11:55:35.103796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.034 [2024-07-15 11:55:35.103808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.034 [2024-07-15 11:55:35.103818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.034 [2024-07-15 11:55:35.106497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.035 [2024-07-15 11:55:35.115941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.035 [2024-07-15 11:55:35.116385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.035 [2024-07-15 11:55:35.116404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.035 [2024-07-15 11:55:35.116415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.035 [2024-07-15 11:55:35.116585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.035 [2024-07-15 11:55:35.116757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.035 [2024-07-15 11:55:35.116774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.035 [2024-07-15 11:55:35.116787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.035 [2024-07-15 11:55:35.119464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.035 [2024-07-15 11:55:35.128927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.035 [2024-07-15 11:55:35.129443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.035 [2024-07-15 11:55:35.129462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.035 [2024-07-15 11:55:35.129473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.035 [2024-07-15 11:55:35.129643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.035 [2024-07-15 11:55:35.129814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.035 [2024-07-15 11:55:35.129826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.035 [2024-07-15 11:55:35.129844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.035 [2024-07-15 11:55:35.132518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.141861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.142294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.142312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.142323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.142492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.142663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.142675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.142685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.145374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.154854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.155233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.155252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.155263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.155433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.155604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.155616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.155627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.158301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.167777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.168281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.168300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.168311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.168482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.168653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.168665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.168674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.171353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.180667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.181186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.181205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.181215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.181386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.181556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.181568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.181578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.184253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.193547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.194003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.194022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.194033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.194203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.194373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.194385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.194394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.197068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.206542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.207049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.207069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.207079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.207250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.207422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.207434] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.207443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.210120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.323 [2024-07-15 11:55:35.219436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.323 [2024-07-15 11:55:35.219956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.323 [2024-07-15 11:55:35.219975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.323 [2024-07-15 11:55:35.219985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.323 [2024-07-15 11:55:35.220156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.323 [2024-07-15 11:55:35.220331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.323 [2024-07-15 11:55:35.220342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.323 [2024-07-15 11:55:35.220351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.323 [2024-07-15 11:55:35.223020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.232330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.232846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.232865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.232876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.233046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.233216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.233228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.233238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.235912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.245215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.245699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.245717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.245727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.245903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.246074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.246086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.246095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.248761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.258219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.258735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.258754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.258764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.258940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.259111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.259123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.259132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.261810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.271135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.271631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.271650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.271660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.271836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.272008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.272020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.272029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.274698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.284107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.284553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.284572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.284582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.284753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.284929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.284941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.284950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.287619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.297300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.297822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.297846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.297857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.298028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.298198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.298210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.298220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.300891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.310217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.310684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.310703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.310717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.310893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.311063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.311075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.311084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.313758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.323237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.323764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.323782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.323793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.323968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.324138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.324150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.324159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.326850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.336144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.336573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.336592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.336602] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.336772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.336948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.336960] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.336969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.339637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.349107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.349620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.349639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.349649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.349818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.349996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.350007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.350016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.324 [2024-07-15 11:55:35.352685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.324 [2024-07-15 11:55:35.362000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.324 [2024-07-15 11:55:35.362446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.324 [2024-07-15 11:55:35.362465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.324 [2024-07-15 11:55:35.362475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.324 [2024-07-15 11:55:35.362645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.324 [2024-07-15 11:55:35.362816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.324 [2024-07-15 11:55:35.362828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.324 [2024-07-15 11:55:35.362842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.325 [2024-07-15 11:55:35.365513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.325 [2024-07-15 11:55:35.374993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.325 [2024-07-15 11:55:35.375426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.325 [2024-07-15 11:55:35.375445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.325 [2024-07-15 11:55:35.375455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.325 [2024-07-15 11:55:35.375625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.325 [2024-07-15 11:55:35.375796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.325 [2024-07-15 11:55:35.375808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.325 [2024-07-15 11:55:35.375817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.325 [2024-07-15 11:55:35.378493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.325 [2024-07-15 11:55:35.387960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.325 [2024-07-15 11:55:35.388385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.325 [2024-07-15 11:55:35.388404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.325 [2024-07-15 11:55:35.388414] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.325 [2024-07-15 11:55:35.388585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.325 [2024-07-15 11:55:35.388756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.325 [2024-07-15 11:55:35.388767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.325 [2024-07-15 11:55:35.388777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.325 [2024-07-15 11:55:35.391453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.325 [2024-07-15 11:55:35.400900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.325 [2024-07-15 11:55:35.401405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.325 [2024-07-15 11:55:35.401423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.325 [2024-07-15 11:55:35.401433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.325 [2024-07-15 11:55:35.401603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.325 [2024-07-15 11:55:35.401775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.325 [2024-07-15 11:55:35.401788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.325 [2024-07-15 11:55:35.401797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.325 [2024-07-15 11:55:35.404469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.325 [2024-07-15 11:55:35.413926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.325 [2024-07-15 11:55:35.414423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.325 [2024-07-15 11:55:35.414442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.325 [2024-07-15 11:55:35.414452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.325 [2024-07-15 11:55:35.414621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.325 [2024-07-15 11:55:35.414792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.325 [2024-07-15 11:55:35.414804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.325 [2024-07-15 11:55:35.414813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.325 [2024-07-15 11:55:35.417491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.325 [2024-07-15 11:55:35.426942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.325 [2024-07-15 11:55:35.427370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.325 [2024-07-15 11:55:35.427389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.325 [2024-07-15 11:55:35.427399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.427568] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.427739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.427751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.427760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.430434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.439882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.440400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.440418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.440431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.440601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.440770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.440781] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.440790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.443463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.452761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.453284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.453302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.453312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.453482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.453652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.453663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.453673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.456345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.465639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.466159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.466178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.466187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.466357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.466527] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.466538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.466547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.469224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.478522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.479039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.479057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.479068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.479237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.479407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.479421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.479431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.482103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.491410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.491938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.491957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.491967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.492138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.492310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.492321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.492330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.495001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.504298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.504816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.504839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.504849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.505019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.505189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.505199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.505208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.507884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.517217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.517725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.517744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.517754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.517929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.518100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.518111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.518120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.520791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.530097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:07.585 [2024-07-15 11:55:35.530620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.585 [2024-07-15 11:55:35.530637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420
00:29:07.585 [2024-07-15 11:55:35.530647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set
00:29:07.585 [2024-07-15 11:55:35.530817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor
00:29:07.585 [2024-07-15 11:55:35.530993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:07.585 [2024-07-15 11:55:35.531004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:07.585 [2024-07-15 11:55:35.531014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:07.585 [2024-07-15 11:55:35.533686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:07.585 [2024-07-15 11:55:35.542986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.585 [2024-07-15 11:55:35.543509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.585 [2024-07-15 11:55:35.543526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.585 [2024-07-15 11:55:35.543536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.585 [2024-07-15 11:55:35.543706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.585 [2024-07-15 11:55:35.543879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.585 [2024-07-15 11:55:35.543890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.585 [2024-07-15 11:55:35.543899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.585 [2024-07-15 11:55:35.546573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.585 [2024-07-15 11:55:35.555871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.585 [2024-07-15 11:55:35.556395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.585 [2024-07-15 11:55:35.556413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.585 [2024-07-15 11:55:35.556423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.585 [2024-07-15 11:55:35.556592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.585 [2024-07-15 11:55:35.556762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.585 [2024-07-15 11:55:35.556773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.585 [2024-07-15 11:55:35.556783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.585 [2024-07-15 11:55:35.559455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.585 [2024-07-15 11:55:35.568759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.585 [2024-07-15 11:55:35.569213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.569231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.569241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.569417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.569588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.569598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.569608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.572283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.586 [2024-07-15 11:55:35.581764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.582267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.582285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.582296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.582466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.582636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.582646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.582655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.585512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.586 [2024-07-15 11:55:35.594658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.595172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.595190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.595200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.595372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.595542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.595554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.595563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.598235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.586 [2024-07-15 11:55:35.607530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.608030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.608048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.608058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.608228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.608399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.608410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.608423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.611101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.586 [2024-07-15 11:55:35.620556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.621082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.621102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.621112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.621281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.621454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.621465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.621474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.624153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.586 [2024-07-15 11:55:35.633455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.633979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.633996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.634006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.634176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.634346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.634356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.634365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.637035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.586 [2024-07-15 11:55:35.646333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.646839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.646857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.646867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.647036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.647208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.647217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.647226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.649891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.586 [2024-07-15 11:55:35.659350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.659888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.659909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.659919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.660089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.660260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.660270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.660279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.662954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
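Each burst above is one pass of the bdev_nvme reconnect state machine: a disconnect notice, a connect() refused with errno 111 (ECONNREFUSED — nothing is accepting on 10.0.0.2:4420 yet), a failed flush on the dead socket descriptor, the controller marked failed, and the reset reported failed, with the next attempt following roughly 13 ms later. When triaging a run like this it helps to count the refused attempts and bracket their time window, which separates a short startup race from a persistent network fault. A small sketch; the log filename is a placeholder:

    # Count refused connects and show the first/last occurrence.
    log=nvmf-tcp-phy-autotest.log                                 # placeholder name
    grep -c 'connect() failed, errno = 111' "$log"                # attempt count
    grep 'connect() failed, errno = 111' "$log" | sed -n '1p;$p'  # time window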
00:29:07.586 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.586 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:07.586 11:55:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:07.586 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.586 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.586 [2024-07-15 11:55:35.672256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.672717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.672736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.672746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.672921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.673092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.673103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.673112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.675785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.586 [2024-07-15 11:55:35.685255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.586 [2024-07-15 11:55:35.685705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.586 [2024-07-15 11:55:35.685723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.586 [2024-07-15 11:55:35.685733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.586 [2024-07-15 11:55:35.685907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.586 [2024-07-15 11:55:35.686079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.586 [2024-07-15 11:55:35.686089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.586 [2024-07-15 11:55:35.686098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.586 [2024-07-15 11:55:35.688771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 11:55:35.698231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 11:55:35.698695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.698716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.698726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 [2024-07-15 11:55:35.698901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 [2024-07-15 11:55:35.699071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.699082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.699091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 11:55:35.701761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 11:55:35.711224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 11:55:35.711682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.711699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.711709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 [2024-07-15 11:55:35.711884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 [2024-07-15 11:55:35.712054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.712065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.712075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 [2024-07-15 11:55:35.714742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 11:55:35.717566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 [2024-07-15 11:55:35.724242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 11:55:35.724749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.724766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.724776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 [2024-07-15 11:55:35.724952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 [2024-07-15 11:55:35.725122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.725133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.725142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 11:55:35.727812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 [2024-07-15 11:55:35.737275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 11:55:35.737781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.737798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.737808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 [2024-07-15 11:55:35.737983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 [2024-07-15 11:55:35.738154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.738164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.738173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 11:55:35.740844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 [2024-07-15 11:55:35.750316] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 [2024-07-15 11:55:35.750818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.750844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.750855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 [2024-07-15 11:55:35.751026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 [2024-07-15 11:55:35.751196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.751206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.751215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 11:55:35.753887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.846 Malloc0 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 [2024-07-15 11:55:35.763187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:07.846 [2024-07-15 11:55:35.763684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.846 [2024-07-15 11:55:35.763702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.846 [2024-07-15 11:55:35.763712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.846 [2024-07-15 11:55:35.763886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.846 [2024-07-15 11:55:35.764058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.846 [2024-07-15 11:55:35.764069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.846 [2024-07-15 11:55:35.764078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.846 [2024-07-15 11:55:35.766750] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:07.846 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.847 [2024-07-15 11:55:35.776060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.847 [2024-07-15 11:55:35.776562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.847 [2024-07-15 11:55:35.776580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2246a70 with addr=10.0.0.2, port=4420 00:29:07.847 [2024-07-15 11:55:35.776590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2246a70 is same with the state(5) to be set 00:29:07.847 [2024-07-15 11:55:35.776759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2246a70 (9): Bad file descriptor 00:29:07.847 [2024-07-15 11:55:35.776935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:07.847 [2024-07-15 11:55:35.776946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:07.847 [2024-07-15 11:55:35.776955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:07.847 [2024-07-15 11:55:35.779630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:07.847 [2024-07-15 11:55:35.786214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:07.847 [2024-07-15 11:55:35.788945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:07.847 11:55:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2131069 00:29:07.847 [2024-07-15 11:55:35.817358] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
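The reset storm resolves only once the interleaved rpc_cmd calls finish standing up the target side; the final notice above ("Resetting controller successful") lands right after the listener appears. Collected in order, the bring-up the harness performs is the sequence below. This is a sketch that assumes rpc_cmd forwards to SPDK's scripts/rpc.py, which is not itself shown in this log; the commands and arguments are copied verbatim from the trace:

    # Target-side bring-up interleaved with the reconnect failures above.
    rpc=scripts/rpc.py                                # assumed RPC client path
    $rpc nvmf_create_transport -t tcp -o -u 8192      # "TCP Transport Init"
    $rpc bdev_malloc_create 64 512 -b Malloc0         # backing ramdisk
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Once the listener is up, the pending reset completes:
    # "Resetting controller successful."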
00:29:17.826 00:29:17.826 Latency(us) 00:29:17.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:17.826 Verification LBA range: start 0x0 length 0x4000 00:29:17.826 Nvme1n1 : 15.01 9255.76 36.16 13179.21 0.00 5686.14 835.58 23697.82 00:29:17.826 =================================================================================================================== 00:29:17.826 Total : 9255.76 36.16 13179.21 0.00 5686.14 835.58 23697.82 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:17.826 rmmod nvme_tcp 00:29:17.826 rmmod nvme_fabrics 00:29:17.826 rmmod nvme_keyring 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2131960 ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2131960 ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2131960' 00:29:17.826 killing process with pid 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2131960 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 
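The bdevperf summary above is internally consistent: with 4096-byte IOs, 9255.76 IOPS is 9255.76 × 4096 / 2^20 ≈ 36.16 MiB/s, matching the Nvme1n1 row. A one-line check using only numbers taken from the table itself:

    awk 'BEGIN { printf "%.2f MiB/s\n", 9255.76 * 4096 / (1024 * 1024) }'   # -> 36.16 MiB/s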
00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:17.826 11:55:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.202 11:55:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:19.202 00:29:19.202 real 0m27.371s 00:29:19.202 user 1m3.048s 00:29:19.202 sys 0m7.960s 00:29:19.202 11:55:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:19.202 11:55:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.202 ************************************ 00:29:19.202 END TEST nvmf_bdevperf 00:29:19.202 ************************************ 00:29:19.202 11:55:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:19.202 11:55:47 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:19.202 11:55:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:19.202 11:55:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.202 11:55:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:19.202 ************************************ 00:29:19.202 START TEST nvmf_target_disconnect 00:29:19.202 ************************************ 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:19.202 * Looking for test storage... 
00:29:19.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:19.202 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:19.203 11:55:47 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:25.771 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:25.771 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.771 11:55:53 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:25.771 Found net devices under 0000:af:00.0: cvl_0_0 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:25.771 Found net devices under 0000:af:00.1: cvl_0_1 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:25.771 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:25.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:29:25.772 00:29:25.772 --- 10.0.0.2 ping statistics --- 00:29:25.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.772 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:25.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:29:25.772 00:29:25.772 --- 10.0.0.1 ping statistics --- 00:29:25.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.772 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.772 ************************************ 00:29:25.772 START TEST nvmf_target_disconnect_tc1 00:29:25.772 ************************************ 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:29:25.772 
11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:25.772 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.772 [2024-07-15 11:55:53.523847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.772 [2024-07-15 11:55:53.523974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd01140 with addr=10.0.0.2, port=4420 00:29:25.772 [2024-07-15 11:55:53.524032] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:25.772 [2024-07-15 11:55:53.524068] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:25.772 [2024-07-15 11:55:53.524096] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:25.772 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:25.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:25.772 Initializing NVMe Controllers 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:29:25.772 00:29:25.772 real 0m0.109s 00:29:25.772 user 0m0.042s 00:29:25.772 sys 0m0.067s 
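tc1 passes because the probe fails: the namespaces are wired (target side cvl_0_0 at 10.0.0.2 inside cvl_0_0_ns_spdk, initiator side cvl_0_1 at 10.0.0.1, both pings clean) but no nvmf target is running yet, so the reconnect example's spdk_nvme_probe() dies on a refused connect to 10.0.0.2:4420, the wrapper records es=1, and the trailing (( !es == 0 )) turns the expected failure into a pass. A sketch of what the NOT helper effectively asserts here, reusing the exact reconnect arguments from the trace (NOT's real body lives in common/autotest_common.sh and is not reproduced in this log):

    # Expected-failure assertion for tc1, modeled on the traced command line.
    if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect \
        -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'; then
        echo 'tc1 FAILED: probe unexpectedly succeeded' >&2
        exit 1
    fi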
00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.772 ************************************ 00:29:25.772 END TEST nvmf_target_disconnect_tc1 00:29:25.772 ************************************ 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.772 ************************************ 00:29:25.772 START TEST nvmf_target_disconnect_tc2 00:29:25.772 ************************************ 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2137235 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2137235 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2137235 ']' 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
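disconnect_init then brings up the tc2 target: nvmfappstart launches nvmf_tgt with core mask 0xF0 (cores 4 through 7, matching the four reactor lines below), and waitforlisten polls the RPC socket until PID 2137235 answers; the "Waiting for process..." message appears before the launch command only because of log interleaving. A hedged approximation of that polling loop, not the actual helper from autotest_common.sh, assuming scripts/rpc.py and the default socket path:

    pid=2137235                # the nvmfpid captured above
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 200); do
        # rpc_get_methods is a harmless query; an answer means the app is up
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then break; fi
        # bail out early if the target process died during startup
        kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early"; exit 1; }
        sleep 0.5
    done
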
00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.772 11:55:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:25.772 [2024-07-15 11:55:53.652677] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:25.772 [2024-07-15 11:55:53.652721] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.772 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.772 [2024-07-15 11:55:53.741462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.772 [2024-07-15 11:55:53.814111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.772 [2024-07-15 11:55:53.814151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.772 [2024-07-15 11:55:53.814160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.772 [2024-07-15 11:55:53.814169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.772 [2024-07-15 11:55:53.814193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.772 [2024-07-15 11:55:53.814307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:25.772 [2024-07-15 11:55:53.814865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:25.772 [2024-07-15 11:55:53.814955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.772 [2024-07-15 11:55:53.814955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.708 Malloc0 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.708 [2024-07-15 11:55:54.535616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.708 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.709 [2024-07-15 11:55:54.567889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2137487 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:26.709 11:55:54 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.709 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.641 11:55:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2137235 00:29:28.641 11:55:56 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 [2024-07-15 11:55:56.597357] 
nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Write completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 [2024-07-15 11:55:56.597584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 
starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.641 starting I/O failed 00:29:28.641 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 [2024-07-15 11:55:56.597805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 
00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Write completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 Read completed with error (sct=0, sc=8) 00:29:28.642 starting I/O failed 00:29:28.642 [2024-07-15 11:55:56.598027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:28.642 [2024-07-15 11:55:56.598296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.598315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.598651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.598665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.598957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.598970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.599200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.599212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.599414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.599426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
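The bursts above are the -q 32 workload draining after the target was killed with kill -9: every in-flight command on the four I/O qpairs (ids 1 through 4, one per core in mask 0xF) completes with sct=0, sc=8, the NVMe generic status for a command aborted because its submission queue was deleted, which is what SPDK reports when it fails outstanding requests on a dead connection; each qpair then logs CQ transport error -6, i.e. -ENXIO ("No such device or address"). The connect() failed, errno = 111 entries that follow are ECONNREFUSED on every re-dial of 10.0.0.2:4420. A hedged way to tally the two failure modes, assuming the console output has been saved to a hypothetical reconnect.log:

    # aborted in-flight I/O vs. refused reconnect attempts
    grep -c 'completed with error (sct=0, sc=8)' reconnect.log
    grep -c 'connect() failed, errno = 111' reconnect.log
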
00:29:28.642 [2024-07-15 11:55:56.599750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.599790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.600144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.600198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.600599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.600640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.600948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.600961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.601279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.601292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.601471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.601483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.601830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.601887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.602160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.602200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.602492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.602532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.602856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.602897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 
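For reference, the target these re-dials are aimed at was configured entirely through rpc_cmd earlier in the section: a 64 MiB malloc bdev with 512-byte blocks, a TCP transport created with -o (disabling the C2H success optimization), subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as its namespace, and data plus discovery listeners on 10.0.0.2:4420. Since rpc_cmd is the harness's wrapper around scripts/rpc.py, roughly the same setup as plain calls would be:

    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_transport -t tcp -o         # -o: no C2H success optimization
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
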
00:29:28.642 [2024-07-15 11:55:56.603238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.603277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.603658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.603698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.604044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.604058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.604328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.604344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.642 [2024-07-15 11:55:56.604680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.642 [2024-07-15 11:55:56.604720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.642 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.605019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.605061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.605305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.605345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.605651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.605691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.605983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.605995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.606276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.606288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 
00:29:28.643 [2024-07-15 11:55:56.606600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.606639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.607007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.607048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.607293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.607333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.607719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.607759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.608141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.608181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.608489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.608529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.608917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.608957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.609340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.609380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.609693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.609734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.610038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.610078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 
00:29:28.643 [2024-07-15 11:55:56.610368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.610407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.610747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.610786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.611124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.611165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.611526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.611566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.611955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.611997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.612398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.612438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.612810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.612821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.613142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.613154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.613356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.613369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.613643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.613655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 
00:29:28.643 [2024-07-15 11:55:56.613884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.613896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.614216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.614229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.614508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.614520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.614840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.614853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.643 [2024-07-15 11:55:56.615212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.643 [2024-07-15 11:55:56.615224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.643 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.615526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.615538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.615842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.615855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.616170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.616183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.616495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.616507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.616819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.616844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 
00:29:28.644 [2024-07-15 11:55:56.617223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.617264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.617575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.617614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.617966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.617997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.618378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.618423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.618708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.618748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.619058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.619098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.619484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.619523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.619881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.619923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.620315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.620355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.620733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.620773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 
00:29:28.644 [2024-07-15 11:55:56.621170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.621210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.621583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.621622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.622006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.622047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.622406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.622445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.622847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.622888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.623200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.623240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.623624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.623663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.624045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.624087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.624331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.624371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.624674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.624714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 
00:29:28.644 [2024-07-15 11:55:56.625092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.625133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.625442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.625482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.625858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.625894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.626190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.626202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.626460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.626473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.626701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.626713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.626950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.626962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.627280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.627292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.627535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.627574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.627933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.627974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 
00:29:28.644 [2024-07-15 11:55:56.628273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.628313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.628692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.628732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.629106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.629118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.644 [2024-07-15 11:55:56.629411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.644 [2024-07-15 11:55:56.629434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.644 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.629620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.629659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.629967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.630008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.630310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.630350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.630727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.630766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.631084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.631097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.631364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.631376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 
00:29:28.645 [2024-07-15 11:55:56.631642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.631654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.631922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.631935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.632201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.632213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.632455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.632469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.632717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.632730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.632906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.632918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.633175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.633188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.633418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.633457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.633686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.633725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.634086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.634126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 
00:29:28.645 [2024-07-15 11:55:56.634440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.634480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.634860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.634900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.635227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.635267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.635557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.635596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.635886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.635926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.636282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.636322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.636610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.636650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.637055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.637096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.637415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.637455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.637734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.637746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 
00:29:28.645 [2024-07-15 11:55:56.637970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.637982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.638231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.638244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.638562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.638574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.638841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.638853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.639216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.639228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.639612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.639652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.639968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.639991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.640231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.640243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.640471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.640483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.640797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.640809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 
00:29:28.645 [2024-07-15 11:55:56.641077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.641090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.645 qpair failed and we were unable to recover it. 00:29:28.645 [2024-07-15 11:55:56.641393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.645 [2024-07-15 11:55:56.641405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.641671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.641683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.642014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.642055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.642341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.642381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.642667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.642706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.643069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.643103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.643409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.643448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.643806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.643874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.644253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.644293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 
00:29:28.646 [2024-07-15 11:55:56.644673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.644713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.645070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.645110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.645345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.645385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.645623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.645668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.646047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.646088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.646411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.646451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.646807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.646854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.647216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.647256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.647636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.647676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.648057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.648097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 
00:29:28.646 [2024-07-15 11:55:56.648382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.648422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.648797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.648864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.649245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.649285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.649663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.649703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.650033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.650073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.650374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.650414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.650792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.650841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.651231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.651271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.651644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.651684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.651990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.652002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 
00:29:28.646 [2024-07-15 11:55:56.652160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.652172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.652494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.646 [2024-07-15 11:55:56.652533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.646 qpair failed and we were unable to recover it. 00:29:28.646 [2024-07-15 11:55:56.652905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.652917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.653094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.653106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.653475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.653515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.653891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.653944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.654334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.654375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.654754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.654794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.655190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.655230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.655520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.655560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 
00:29:28.647 [2024-07-15 11:55:56.655937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.655949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.656204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.656244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.656602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.656641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.656958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.656970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.657220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.657232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.657405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.657417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.657662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.657674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.657989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.658002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.658244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.658256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.658527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.658539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 
00:29:28.647 [2024-07-15 11:55:56.658771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.658783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.659033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.659045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.659286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.659298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.659522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.659536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.659766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.659805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.660169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.660209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.660533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.660572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.660884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.660897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.661240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.661253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.661487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.661499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 
00:29:28.647 [2024-07-15 11:55:56.661792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.661804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.661993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.662006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.662320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.662332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.662638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.662650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.662979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.663019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.663380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.663420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.663726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.663765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.664127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.664140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.664452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.664464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.664711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.664723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 
00:29:28.647 [2024-07-15 11:55:56.664952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.664964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.665159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.665171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.647 [2024-07-15 11:55:56.665401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.647 [2024-07-15 11:55:56.665440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.647 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.665767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.665807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.666223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.666263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.666514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.666553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.666868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.666909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.667189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.667202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.667469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.667481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.667802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.667815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-07-15 11:55:56.668067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.668080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.668396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.668408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.668664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.668676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.668899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.668911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.669248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.669261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.669506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.669519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.669840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.669852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.670179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.670219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.670575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.670624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.670919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.670931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-07-15 11:55:56.671274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.671309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.671687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.671726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.672104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.672117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.672389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.672403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.672703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.672715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.672952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.672964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.673259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.673271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.673610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.673622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.673780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.673792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.674109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.674122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-07-15 11:55:56.674450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.674489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.674793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.674844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.675201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.675213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.675464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.675476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.675886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.675928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.676300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.676312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.676604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.676616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.676918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.676959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.677334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.677373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.677749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.677789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 
00:29:28.648 [2024-07-15 11:55:56.678181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.678222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.678553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.678593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.678990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.679031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.648 qpair failed and we were unable to recover it. 00:29:28.648 [2024-07-15 11:55:56.679318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.648 [2024-07-15 11:55:56.679358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.679712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.679752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.680073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.680114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.680485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.680524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.680863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.680904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.681293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.681333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.681708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.681747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-07-15 11:55:56.682040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.682052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.682358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.682370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.682692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.682731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.683057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.683097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.683473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.683514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.683893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.683933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.684304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.684316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.684609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.684621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.684951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.684963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 00:29:28.649 [2024-07-15 11:55:56.685262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.685302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
00:29:28.649 [2024-07-15 11:55:56.685681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.649 [2024-07-15 11:55:56.685714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.649 qpair failed and we were unable to recover it. 
[... the same three-message sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously for every connection attempt from 11:55:56.685681 through 11:55:56.756621; only the timestamps differ ...]
00:29:28.928 [2024-07-15 11:55:56.756582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.756621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 
00:29:28.928 [2024-07-15 11:55:56.756938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.756979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.757359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.757399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.757775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.757815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.758200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.758240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.758495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.758507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.758797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.758809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.759055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.759068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.759297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.759309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.759617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.759629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.759867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.759879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 
00:29:28.928 [2024-07-15 11:55:56.760118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.760130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.760377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.760389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.760695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.760707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.761001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.761014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.761336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.761348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.761667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.761682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.762067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.762107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.762396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.762409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.762636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.762648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 00:29:28.928 [2024-07-15 11:55:56.762964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.762976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.928 qpair failed and we were unable to recover it. 
00:29:28.928 [2024-07-15 11:55:56.763244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.928 [2024-07-15 11:55:56.763259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.763489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.763502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.763820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.763837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.764079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.764091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.764392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.764404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.764644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.764657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.764898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.764911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.765169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.765181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.765493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.765506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.765821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.765839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 
00:29:28.929 [2024-07-15 11:55:56.766102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.766115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.766487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.766527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.766845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.766886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.767258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.767270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.767647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.767688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.768015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.768027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.768321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.768333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.768600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.768612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.768869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.768882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.769197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.769209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 
00:29:28.929 [2024-07-15 11:55:56.769467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.769480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.769800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.769813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.770173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.770220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.770452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.770491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.770893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.770935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.771290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.771330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.771722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.771761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.772129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.929 [2024-07-15 11:55:56.772141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.929 qpair failed and we were unable to recover it. 00:29:28.929 [2024-07-15 11:55:56.772458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.772470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.772712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.772725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 
00:29:28.930 [2024-07-15 11:55:56.772900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.772913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.773147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.773161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.773388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.773400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.773699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.773712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.774039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.774052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.774315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.774356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.774749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.774788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.775103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.775116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.775363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.775376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.775756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.775797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 
00:29:28.930 [2024-07-15 11:55:56.776120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.776166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.776518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.776530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.776755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.776767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.777012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.777024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.777273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.777285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.777463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.777476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.777667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.777707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.777995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.778036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.778255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.778295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.778672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.778712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 
00:29:28.930 [2024-07-15 11:55:56.779018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.779059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.779417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.779456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.779871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.779912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.780161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.780173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.780436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.780448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.930 [2024-07-15 11:55:56.780774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.930 [2024-07-15 11:55:56.780786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.930 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.781011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.781023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.781200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.781212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.781437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.781449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.781765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.781776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 
00:29:28.931 [2024-07-15 11:55:56.782039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.782062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.782362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.782374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.782615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.782627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.782941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.782953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.783143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.783155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.783397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.783409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.783606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.783618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.783850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.783863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.784156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.784168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.784484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.784495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 
00:29:28.931 [2024-07-15 11:55:56.784786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.784798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.785120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.785161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.785469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.785508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.785797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.785845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.786210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.786250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.786546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.786558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.786714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.786725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.787038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.787050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.787274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.787286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.787461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.787473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 
00:29:28.931 [2024-07-15 11:55:56.787805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.787848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.788091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.788104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.788421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.931 [2024-07-15 11:55:56.788433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.931 qpair failed and we were unable to recover it. 00:29:28.931 [2024-07-15 11:55:56.788657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.788669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.788995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.789007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.789325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.789337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.789616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.789628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.789923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.789935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.790269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.790308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.790552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.790592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 
00:29:28.932 [2024-07-15 11:55:56.790950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.790990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.791377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.791417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.791797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.791844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.792106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.792145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.792529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.792570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.792884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.792931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.793178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.793191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.793443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.793456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.793766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.793778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.794034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.794047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 
00:29:28.932 [2024-07-15 11:55:56.794242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.794254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.794502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.794514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.794709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.932 [2024-07-15 11:55:56.795107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.932 [2024-07-15 11:55:56.795147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.932 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.795446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.795458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.795779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.795792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.796036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.796049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.796359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.796371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.796611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.796623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 00:29:28.933 [2024-07-15 11:55:56.796939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.933 [2024-07-15 11:55:56.796951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.933 qpair failed and we were unable to recover it. 
00:29:28.933 [2024-07-15 11:55:56.797290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.933 [2024-07-15 11:55:56.797330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:28.933 qpair failed and we were unable to recover it.
[... the same three-record failure repeats verbatim (timestamps 11:55:56.797698 through 11:55:56.861675): every reconnect attempt logs connect() failed, errno = 111 from posix_sock_create, a sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock, and ends with "qpair failed and we were unable to recover it." ...]
00:29:28.941 [2024-07-15 11:55:56.861860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.861874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.862062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.862102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.862413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.862453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.862739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.862779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.863043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.863084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.863256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.863296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.863504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.863516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.863759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.863771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.864019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.864032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.864296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.864335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 
00:29:28.941 [2024-07-15 11:55:56.864642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.864682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.864970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.865010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.865311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.865351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.865577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.865617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.865858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.865899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.866209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.866250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.866416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.866456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.866811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.866861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.867039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.867080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.867438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.867450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 
00:29:28.941 [2024-07-15 11:55:56.867694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.867706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.867967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.867980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.868214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.868227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.868386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.868398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.868586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.941 [2024-07-15 11:55:56.868637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.941 qpair failed and we were unable to recover it. 00:29:28.941 [2024-07-15 11:55:56.868883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.868925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.869287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.869328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.869694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.869771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.870113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.870159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.870537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.870570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 
00:29:28.942 [2024-07-15 11:55:56.870891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.870934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.871242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.871283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.871490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.871506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.871741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.871758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.871949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.871966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.872135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.872152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.872327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.872344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.872672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.872689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.872880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.872897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.873079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.873096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 
00:29:28.942 [2024-07-15 11:55:56.873288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.873305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.873583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.873599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.873800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.873816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.873997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.874014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.874194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.874210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.874476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.874492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.874727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.874744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.874986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.875003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.875307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.875323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.875492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.875509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 
00:29:28.942 [2024-07-15 11:55:56.875676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.875693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.875874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.875890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.876074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.942 [2024-07-15 11:55:56.876091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.942 qpair failed and we were unable to recover it. 00:29:28.942 [2024-07-15 11:55:56.876292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.876332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.876707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.876752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.877118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.877159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.877448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.877487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.877846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.877863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.878106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.878123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.878409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.878449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 
00:29:28.943 [2024-07-15 11:55:56.878751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.878791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.879027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.879068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.879360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.879376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.879564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.879581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.879885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.879903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.880075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.880091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.880345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.880385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.880692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.880732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.880970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.881011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.881368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.881408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 
00:29:28.943 [2024-07-15 11:55:56.881639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.881679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.881986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.882026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.882376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.882392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.882625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.882665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.883000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.883040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.883275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.883313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.883594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.883610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.883845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.883862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.884061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.884078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.884402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.884419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 
00:29:28.943 [2024-07-15 11:55:56.884604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.943 [2024-07-15 11:55:56.884620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.943 qpair failed and we were unable to recover it. 00:29:28.943 [2024-07-15 11:55:56.884863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.884910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.885212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.885252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.885530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.885546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.885779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.885796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.886098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.886116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.886433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.886450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.886766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.886783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.886968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.886985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.887152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.887169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-07-15 11:55:56.887342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.887358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.887606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.887623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.887872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.887890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.888193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.888210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.888453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.888469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.888705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.888722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.888920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.888937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.889169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.889186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.889489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.889505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.889696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.889713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 
00:29:28.944 [2024-07-15 11:55:56.890049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.890066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.890301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.890317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.890553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.890569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.890940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.890981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.891293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.891333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.891621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.891660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.891851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.891892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.892248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.892288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.892644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.944 [2024-07-15 11:55:56.892683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.944 qpair failed and we were unable to recover it. 00:29:28.944 [2024-07-15 11:55:56.892917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.892959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 
00:29:28.945 [2024-07-15 11:55:56.893252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.893291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-07-15 11:55:56.893578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.893618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-07-15 11:55:56.893923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.893940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-07-15 11:55:56.894122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.894139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.945 qpair failed and we were unable to recover it. 00:29:28.945 [2024-07-15 11:55:56.894377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.945 [2024-07-15 11:55:56.894417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.894658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.894698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.895054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.895095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.895413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.895457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.895761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.895778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.896106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 
00:29:28.946 [2024-07-15 11:55:56.896297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.896313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.896502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.896542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.896919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.896966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.897277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.897317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.897544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.897561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.897796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.897812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.897917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.897933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.898174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.898191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.898446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.898462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.898776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.898792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 
00:29:28.946 [2024-07-15 11:55:56.899061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.899078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.899311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.899328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.899654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.946 [2024-07-15 11:55:56.899670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.946 qpair failed and we were unable to recover it. 00:29:28.946 [2024-07-15 11:55:56.899842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.899859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.900132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.900148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.900270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.900286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.900542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.900582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.900818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.900872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.901109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.901149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.901434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.901474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 11:55:56.901773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.901812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.902148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.902189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.902476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.902515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.902675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.902715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.902960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.903001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.903380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.903420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.903727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.903767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.904082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.904123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.904430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.904476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.904711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.904728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 11:55:56.904929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.904947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.905305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.905345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.905702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.905741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.906113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.906154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.906540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.906580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.906955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.906996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.907215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.907255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.907478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.907518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.907854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.907872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.908107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.908124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 11:55:56.908425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.908441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.908623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.908640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.908839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.908856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.909177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.909217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.909439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.909455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.909633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.909650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.909852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.909892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.910202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.910242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.910482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.910498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.910751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.910767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 
00:29:28.947 [2024-07-15 11:55:56.910871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.910887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.911211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.911228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.911542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.911581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.911885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.947 [2024-07-15 11:55:56.911925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.947 qpair failed and we were unable to recover it. 00:29:28.947 [2024-07-15 11:55:56.912280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.912320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.912610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.912650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.912882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.912923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.913236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.913276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.913657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.913697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.913999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.914040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 11:55:56.914422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.914462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.914771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.914811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.915187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.915227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.915526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.915543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.915816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.915837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.916097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.916114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.916353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.916369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.916642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.916659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.916917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.916933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.917256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.917273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 11:55:56.917477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.917497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.917826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.917855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.918140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.918157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.918392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.918408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.918717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.918733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.918981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.918998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.919230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.919246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.919439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.919455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.919732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.919772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.920136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.920176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 11:55:56.920456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.920473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.920803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.920820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.921124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.921165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.921531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.921570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.921808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.921861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.922242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.922283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.922511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.922550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.922782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.922798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.923005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.923023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.923288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.923304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 
00:29:28.948 [2024-07-15 11:55:56.923490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.923506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.923766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.923782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.924036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.924054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.924309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.924326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.948 [2024-07-15 11:55:56.924511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.948 [2024-07-15 11:55:56.924528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.948 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.924792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.924841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.925066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.925107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.925410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.925450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.925808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.925825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.925946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.925963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 11:55:56.926212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.926229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.926556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.926573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.926900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.926917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.927166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.927199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.927554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.927593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.927992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.928033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.928255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.928295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.928603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.928643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.928944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.928985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.929286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.929326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 11:55:56.929652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.929691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.930008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.930035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.930204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.930221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.930505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.930521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.930768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.930785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.931018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.931035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.931340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.931356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.931604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.931621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.931879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.931896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.932147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.932164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 11:55:56.932467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.932484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.932739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.932756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.932951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.932968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.933270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.933287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.933469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.933485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.933670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.933710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.933884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.933925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.934280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.934320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.934527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.934544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.934741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.934757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 
00:29:28.949 [2024-07-15 11:55:56.935015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.935034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.935362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.935379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.935636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.935652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.935907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.935925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.936182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.936198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.949 [2024-07-15 11:55:56.936523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.949 [2024-07-15 11:55:56.936540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.949 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.936810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.936826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.937141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.937158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.937487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.937533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.937854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.937895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 11:55:56.938134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.938175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.938534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.938574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.938855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.938872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.939140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.939157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.939392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.939409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.939655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.939672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.939972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.939989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.940319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.940336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.940545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.940562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.940747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.940764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 11:55:56.941036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.941053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.941219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.941235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.941454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.941471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.941732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.941772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.942004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.942045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.942445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.942484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.942719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.942759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.943067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.943108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.943475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.943515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.943889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.943930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 11:55:56.944304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.944345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.944623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.944639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.944881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.944898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.945147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.945163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.945486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.945503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.945736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.945753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.946058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.946095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.946488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.946528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.946818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.946868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 00:29:28.950 [2024-07-15 11:55:56.947249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.947289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.950 qpair failed and we were unable to recover it. 
00:29:28.950 [2024-07-15 11:55:56.947643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.950 [2024-07-15 11:55:56.947683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.948060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.948101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.948321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.948361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.948719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.948759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.949126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.949166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.949408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.949448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.949734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.949774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.950018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.950058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.950414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.950454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.950808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.950863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 11:55:56.951273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.951315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.951692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.951740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.952069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.952111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.952440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.952480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.952735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.952775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.953099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.953140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.953519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.953559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.953913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.953930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.954166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.954183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.954364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.954380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 11:55:56.954683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.954700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.954934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.954951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.955255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.955272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.955616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.955660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.956014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.956055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.956412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.956452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.956823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.956873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.957249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.957289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.957637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.957676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 00:29:28.951 [2024-07-15 11:55:56.958031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.951 [2024-07-15 11:55:56.958072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:28.951 qpair failed and we were unable to recover it. 
00:29:28.951 [2024-07-15 11:55:56.958376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.958416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.958745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.958762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.958996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.959013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.959287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.959304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.959562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.959579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.959838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.959855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.960090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.960109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.951 qpair failed and we were unable to recover it.
00:29:28.951 [2024-07-15 11:55:56.960342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.951 [2024-07-15 11:55:56.960359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.960691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.960707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.960952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.960969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.961221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.961238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.961517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.961533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.961775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.961791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.962056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.962073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.962380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.962421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.962709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.962749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.962959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.962976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.963239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.963279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.963666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.963705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.964029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.964070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.964520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.964598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.964931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.964979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.965346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.965388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.965693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.965733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.966112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.966153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.966465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.966504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.966755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.966795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.967138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.967179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.967487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.967526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.967867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.967884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.968250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.968290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.968621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.968661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.969000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.969041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.969404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.969453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.969772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.969812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.970118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.970135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.970319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.970335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.970571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.970588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.970908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.970925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.971175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.971191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.971552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.971592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.971967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.972008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.972311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.972351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.972704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.972744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.973077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.973118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.973353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.973393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.973566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.973605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.973944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.952 [2024-07-15 11:55:56.973961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.952 qpair failed and we were unable to recover it.
00:29:28.952 [2024-07-15 11:55:56.974150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.974166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.974451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.974468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.974650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.974667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.974933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.974950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.975267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.975284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.975516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.975532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.975883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.975899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.976066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.976083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.976402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.976442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.976679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.976719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.977096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.977113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.977328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.977368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.977753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.977793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.978131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.978148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.978254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.978271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.978598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.978615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.978970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.978986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.979173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.979189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.979546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.979586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.979747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.979786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.980104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.980145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.980456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.980496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.980863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.980903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.981212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.981252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.981631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.981671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.981973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.981997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.982256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.982273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.982556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.982573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.982899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.982916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.983164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.983181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.983439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.983455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.983689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.983705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.983945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.983962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.984290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.984307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.984642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.984682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.985067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.985109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.985356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.985397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.985690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.985730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.986023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.986064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.986396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.986437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.986738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.953 [2024-07-15 11:55:56.986777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.953 qpair failed and we were unable to recover it.
00:29:28.953 [2024-07-15 11:55:56.987007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.987048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.987360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.987401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.987664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.987681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.988003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.988021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.988335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.988375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.988756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.988797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.989101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.989119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.989383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.989400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.989645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.989662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.989809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.989826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.990092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.990109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.990363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.990379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.990617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.990656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.990949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.990991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.991290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.991330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.991637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.991677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.991990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.992007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.992331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.992348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.992525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.992541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.992777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.992793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.993101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.993119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.993371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.993388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.993637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.993654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.994005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.994039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.994202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.994247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.994550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.994602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.994794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.994811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.995098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.995115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.995443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.995459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.995774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.995791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.995989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.996006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.996332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.996349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.996607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.996624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.996804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.996821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.997017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.997034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.997270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.997301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.997557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.997597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.997977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.998018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.998322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.998362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.998708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.998747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.954 [2024-07-15 11:55:56.998986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.954 [2024-07-15 11:55:56.999027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.954 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:56.999429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:56.999469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:56.999692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:56.999709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.000033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.000050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.000305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.000321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.000624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.000641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.000945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.000991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.001366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.001406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.001693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.001733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.002023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.002063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.002416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.002456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.002841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.002858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.003178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.003194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.003442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.003459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.003718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.003769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.004136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.004177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.004478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.004518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.004818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.004858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.005056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.005073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.005326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.005366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.005746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.005796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.006072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.006089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.006325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.006341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.006666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.006682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.006921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.006940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.007215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.007231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.007581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.007597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.007885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.007926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.008223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.008263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.008549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.008589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.008876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.008893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.009165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.009182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.009415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.009432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.009733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.009749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.010021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.010038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.955 qpair failed and we were unable to recover it.
00:29:28.955 [2024-07-15 11:55:57.010294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.955 [2024-07-15 11:55:57.010311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.010652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-07-15 11:55:57.010668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.010912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-07-15 11:55:57.010929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.011175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-07-15 11:55:57.011192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.011463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-07-15 11:55:57.011479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.011714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.956 [2024-07-15 11:55:57.011730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:28.956 qpair failed and we were unable to recover it.
00:29:28.956 [2024-07-15 11:55:57.011987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.012004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.012267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.012308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.012592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.012632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.012918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.012934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.013169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.013186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.013479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.013495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.013664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.013680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.014011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.014052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.014361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.014401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.014713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.014753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:28.956 [2024-07-15 11:55:57.015022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.015039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.015225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.015241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.015589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.015605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.015884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.015902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.016098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.016114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.016384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.016424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.016729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.016769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.017117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.017134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.017435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.017452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 00:29:28.956 [2024-07-15 11:55:57.017704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.956 [2024-07-15 11:55:57.017721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:28.956 qpair failed and we were unable to recover it. 
00:29:29.231 [2024-07-15 11:55:57.017968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.017985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.018255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.018271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.018547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.018563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.018817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.018843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.019043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.019060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.019234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.019250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.019548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.019564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.019811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.019828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.020104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.020122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.020424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.020441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 
00:29:29.231 [2024-07-15 11:55:57.020676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.020693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.020935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.020952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.021303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.021342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.021638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.021678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.022070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.022112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.022521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.022561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.022862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.022879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.023138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.023155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.023478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.023495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.023777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.023816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 
00:29:29.231 [2024-07-15 11:55:57.024132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.024173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.024459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.024499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.024870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.024912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.025255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.025271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.025565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.025605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.025844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.025886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.026177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.026217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.026579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.026619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.026918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.026958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.027340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.027381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 
00:29:29.231 [2024-07-15 11:55:57.027600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.027647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.028027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.028044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.028361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.028378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.231 [2024-07-15 11:55:57.028614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.231 [2024-07-15 11:55:57.028630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.231 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.028865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.028882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.029210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.029226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.029557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.029597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.029950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.029991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.030345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.030385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.030666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.030683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 
00:29:29.232 [2024-07-15 11:55:57.030866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.030883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.031171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.031187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.031464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.031481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.031736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.031752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.032086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.032103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.032356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.032373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.032696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.032712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.032959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.032976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.033230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.033246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.033478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.033494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 
00:29:29.232 [2024-07-15 11:55:57.033746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.033762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.034097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.034114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.034440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.034457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.034738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.034777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.035028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.035069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.035472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.035512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.035866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.035907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.036156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.036197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.036560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.036600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.036890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.036931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 
00:29:29.232 [2024-07-15 11:55:57.037230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.037270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.037570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.037610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.037771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.037817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.038012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.038029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.038216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.038256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.038496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.038536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.038850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.038891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.039212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.039252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.039573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.039613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.039903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.039920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 
00:29:29.232 [2024-07-15 11:55:57.040024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.040045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.040291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.040307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.040609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.040656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.040951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.232 [2024-07-15 11:55:57.040992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.232 qpair failed and we were unable to recover it. 00:29:29.232 [2024-07-15 11:55:57.041360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.041400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.041633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.041673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.041985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.042002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.042325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.042342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.042591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.042608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.042846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.042863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-07-15 11:55:57.043142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.043159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.043484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.043500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.043685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.043702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.043883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.043900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.044123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.044163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.044451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.044491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.044868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.044909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.045268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.045307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.045682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.045722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.046120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.046137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-07-15 11:55:57.046401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.046441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.046741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.046780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.047080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.047097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.047333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.047349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.047610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.047649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.047972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.048013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.048316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.048356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.048676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.048716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.049030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.049071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.049362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.049402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-07-15 11:55:57.049746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.049797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.050046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.050063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.050368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.050384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.050580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.050597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.050841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.050858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.051111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.051128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.051432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.051448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.051686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.051702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.052021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.052061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.052350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.052389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 
00:29:29.233 [2024-07-15 11:55:57.052744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.052790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.053085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.053126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.053484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.053524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.053823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.053873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.233 qpair failed and we were unable to recover it. 00:29:29.233 [2024-07-15 11:55:57.054159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.233 [2024-07-15 11:55:57.054198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.054523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.054563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.054809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.054885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.055057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.055097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.055451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.055491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.055853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.055895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 
00:29:29.234 [2024-07-15 11:55:57.056196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.056236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.056560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.056599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.056957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.056998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.057284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.057324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.057570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.057610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.057893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.057933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.058233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.058272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.058628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.058668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.059043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.059089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 00:29:29.234 [2024-07-15 11:55:57.059257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.234 [2024-07-15 11:55:57.059274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.234 qpair failed and we were unable to recover it. 
00:29:29.234 [2024-07-15 11:55:57.059605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.059645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.059952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.059998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.060241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.060258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.060494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.060511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.060744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.060760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.060996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.061012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.061263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.061280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.061518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.061534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.061770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.061786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.062111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.062128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.062375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.062391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.062587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.062603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.062908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.062925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.063176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.063193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.063506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.063523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.063757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.063773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.064045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.064062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.064337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.064353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.064611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.064627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.064860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.064878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.065084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.065103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.065365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.065381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.065682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.065698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.065910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.065926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.066184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.234 [2024-07-15 11:55:57.066200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.234 qpair failed and we were unable to recover it.
00:29:29.234 [2024-07-15 11:55:57.066461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.066478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.066657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.066673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.066922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.066963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.067318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.067358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.067658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.067697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.068006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.068047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.068348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.068388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.068691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.068730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.069074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.069091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.069400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.069441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.069848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.069889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.070136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.070153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.070455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.070471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.070718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.070734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.071012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.071028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.071281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.071297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.071553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.071569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.071872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.071889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.072166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.072182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.072359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.072375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.072585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.072600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.072868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.072884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.073073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.073089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.073401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.073416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.073673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.073689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.073936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.073952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.074253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.074269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.074513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.074529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.074829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.074855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.075156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.075173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.075423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.075439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.075704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.075720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.235 [2024-07-15 11:55:57.075897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.235 [2024-07-15 11:55:57.075913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.235 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.076160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.076176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.076424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.076440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.076671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.076690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.077017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.077033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.077281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.077296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.077544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.077560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.077794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.077810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.077993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.078009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.078269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.078290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.078494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.078510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.078680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.078697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.078903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.078920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.079103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.079118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.079307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.079323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.079528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.079545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.079802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.079818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.080019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.080036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.080350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.080366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.080535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.080553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.080826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.080847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.081056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.081073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.081332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.081373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.081680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.081722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.081970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.082011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.082246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.082262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.082443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.082460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.082654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.082703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.083073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.083116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.083492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.083533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.083855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.083915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.084145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.084187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.084550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.084589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.084894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.084912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.085089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.085107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.085450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.085494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.085864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.085919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.086155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.086171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.086485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.086524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.086825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.086845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.087102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.236 [2024-07-15 11:55:57.087119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.236 qpair failed and we were unable to recover it.
00:29:29.236 [2024-07-15 11:55:57.087297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.087314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.087666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.087706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.088031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.088079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.088375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.088414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.088707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.088746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.088987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.089026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.089380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.089419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.089708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.089749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.089997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.090015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.090342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.090359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.090649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.090666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.090920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.090937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.091275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.091315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.091616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.091655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.091974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.091991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.092296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.092313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.092616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.092633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.092870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.092887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.093085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.093102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.093285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.093301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.093626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.093684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.093998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.094040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.094434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.094474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.094764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.094803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.095161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.095212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.095448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.095488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.095784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.095824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.096026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.096043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.096297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.096314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.096552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.096569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.096817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.096839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.097005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.097021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.097190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.097206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.097551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.097591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.097969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.098010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.098301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.098318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.098566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.098582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.098819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.098848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.099076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.099093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.099343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.099359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.099687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.237 [2024-07-15 11:55:57.099703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.237 qpair failed and we were unable to recover it.
00:29:29.237 [2024-07-15 11:55:57.100053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.100071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.100399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.100418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.100705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.100745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.101125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.101164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.101509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.101525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.101811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.101870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.102186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.102230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.102453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.102492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.102865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.102909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.103252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.103296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.103586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.103627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.104015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.104033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.104276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.104292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.104460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.104476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.104711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.104727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.104982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.104999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.105263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.105279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.105607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.105623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.105927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.105944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.106187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.106227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.106586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.106625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.106981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.106999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.107327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.107343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.107578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.107594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.107812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.107829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.108075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.108091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.108279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.108295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.108482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.108499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.108684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.108724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.109027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.109067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.109422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.109439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.109609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.109625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.109823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.109844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.110015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.110032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.110278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.110318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.110605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.110645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.110944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.110961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.111134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.111151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.111388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.111405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.111651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.111667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.238 [2024-07-15 11:55:57.111849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.238 [2024-07-15 11:55:57.111866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.238 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.112167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.112186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.112379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.112396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.112643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.112659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.112830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.112850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.113098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.113138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.113428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.113468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.113755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.113795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.114155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.114171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.114310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.114349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.114731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.114772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.115119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.115136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.115388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.115424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.115780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.115820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.116205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.116221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.116575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.116593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.116900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.116919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.117237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.117254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.117538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.117556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.117823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.117845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.118012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.118029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.118214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.118230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.118537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.118556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.118744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.118760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.119003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.119020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.119200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.119216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.119500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.119517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.119738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.119755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.120012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.120029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.120263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.120280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.120532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.239 [2024-07-15 11:55:57.120548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.239 qpair failed and we were unable to recover it.
00:29:29.239 [2024-07-15 11:55:57.120728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 11:55:57.120745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 11:55:57.120952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 11:55:57.120969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 11:55:57.121279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 11:55:57.121319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.239 [2024-07-15 11:55:57.121620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.239 [2024-07-15 11:55:57.121659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.239 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.121976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.122016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.122181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.122221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.122399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.122439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.122741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.122786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.123081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.123098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.123215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.123231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 11:55:57.123470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.123489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.123656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.123673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.123944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.123961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.124200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.124216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.124531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.124548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.124857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.124874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.125067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.125084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.125412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.125429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.125735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.125752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.125942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.125959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 11:55:57.126234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.126251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.126486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.126502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.126700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.126717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.126952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.126969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.127305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.127322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.127556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.127573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.127841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.127858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.128164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.128180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.128440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.128456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.128626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.128642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 11:55:57.128966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.128982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.129325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.129360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.129695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.129735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.130037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.130055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.130291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.130308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.130560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.130577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.130809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.130826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.131242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.131322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.131727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.131771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.132131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.132174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 
00:29:29.240 [2024-07-15 11:55:57.132477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.132519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.132867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.132909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.133210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.133222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.240 [2024-07-15 11:55:57.133471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.240 [2024-07-15 11:55:57.133483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.240 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.133652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.133664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.133959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.133972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.134209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.134222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.134463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.134476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.134782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.134794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.135044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.135057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 11:55:57.135145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.135160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.135323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.135335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.135517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.135530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.135778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.135790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.136031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.136073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.136362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.136402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.136772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.136812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.137155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.137195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.137512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.137551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.137876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.137889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 11:55:57.138074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.138086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.138328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.138340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.138511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.138523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.138841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.138854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.139175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.139187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.139461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.139501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.139875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.139916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.140262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.140303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.140683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.140720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.141015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.141028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 11:55:57.141253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.141265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.141493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.141505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.141762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.141774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.141961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.141974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.142162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.142202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.142532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.142572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.142995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.143036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.143456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.143508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.143814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.143863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.144165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.144206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 
00:29:29.241 [2024-07-15 11:55:57.144569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.144609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.144964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.145005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.145310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.145350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.145750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.145790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.241 qpair failed and we were unable to recover it. 00:29:29.241 [2024-07-15 11:55:57.146184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.241 [2024-07-15 11:55:57.146224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.146576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.146616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.146979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.146996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.147323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.147339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.147530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.147547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.147707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.147723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 11:55:57.147978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.148027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.148270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.148311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.148638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.148678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.148996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.149038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.149260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.149301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.149621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.149660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.150037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.150078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.150372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.150388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.150653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.150693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.150990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.151030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 11:55:57.151421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.151460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.151763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.151805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.152075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.152092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.152440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.152486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.152876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.152917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.153301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.153342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.153720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.153760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.154073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.154114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.154522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.154562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.154731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.154771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 11:55:57.155136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.155177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.155481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.155520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.155808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.155858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.156215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.156256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.156492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.156532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.156877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.156894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.157142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.157159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.157390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.157419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.157690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.157704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.157946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.157959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 
00:29:29.242 [2024-07-15 11:55:57.158094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.158106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.158372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.158384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.158622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.158634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.158954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.159013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.159399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.242 [2024-07-15 11:55:57.159439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.242 qpair failed and we were unable to recover it. 00:29:29.242 [2024-07-15 11:55:57.159756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.159796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.160113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.160153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.160464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.160504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.160895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.160935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.161222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.161262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-07-15 11:55:57.161649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.161709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.162063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.162075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.162402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.162414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.162670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.162682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.162912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.162925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.163173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.163185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.163426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.163438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.163612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.163624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.163800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.163812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.164047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.164060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-07-15 11:55:57.164296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.164336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.164632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.164671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.164973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.165014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.165393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.165433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.165743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.165784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.166157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.166198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.166581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.166621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.166852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.166894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.167205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.167246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.167480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.167520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-07-15 11:55:57.167818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.167873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.168112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.168124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.168348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.168360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.168622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.168634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.168958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.168971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.169277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.169316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.169669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.169709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.170029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.170070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.170359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.170398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.170726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.170766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 
00:29:29.243 [2024-07-15 11:55:57.171172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.171213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.171505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.171545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.171948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.171989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.172293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.172332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.172657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.172697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.173051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.243 [2024-07-15 11:55:57.173063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.243 qpair failed and we were unable to recover it. 00:29:29.243 [2024-07-15 11:55:57.173238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.173279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.173564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.173604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.173850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.173890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.174217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.174229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 
00:29:29.244 [2024-07-15 11:55:57.174544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.174555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.174889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.174902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.175139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.175179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.175539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.175579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.175933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.175974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.176194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.176234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.176507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.176519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.176739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.176751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.177064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.177076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.177388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.177401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 
00:29:29.244 [2024-07-15 11:55:57.177645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.177657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.177837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.177849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.178107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.178147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.178502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.178542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.178858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.178904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.179070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.179082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.179415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.179455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.179769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.179808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.180125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.180166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.180457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.180497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 
00:29:29.244 [2024-07-15 11:55:57.180807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.180860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.181039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.181051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.181317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.181356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.181642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.181683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.182064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.182105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.182483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.182524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.182905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.182946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.183242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.183256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.183653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.183690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.244 [2024-07-15 11:55:57.183962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.184009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 
00:29:29.244 [2024-07-15 11:55:57.184389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.244 [2024-07-15 11:55:57.184431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.244 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.184586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.184626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.184934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.184981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.185225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.185242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.185564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.185580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.185781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.185798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.186039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.186056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.186387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.186404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.186640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.186656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.187012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.187029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 
00:29:29.245 [2024-07-15 11:55:57.187313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.187330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.187595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.187612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.187915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.187932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.188254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.188294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.188584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.188623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.188978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.189019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.189253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.189269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.189523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.189540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.189863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.189880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.190090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.190107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 
00:29:29.245 [2024-07-15 11:55:57.190368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.190385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.190751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.190790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.191158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.191199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.191495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.191512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.191824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.191848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.192120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.192136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.192447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.192463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.192765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.192782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.193056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.193073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.193315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.193354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 
00:29:29.245 [2024-07-15 11:55:57.193710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.193749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.194070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.194111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.194463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.194503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.194821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.194870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.195226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.195266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.195619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.195662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.195915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.195956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.196191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.196231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.196519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.196536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.196793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.196810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 
00:29:29.245 [2024-07-15 11:55:57.197091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.197128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.245 [2024-07-15 11:55:57.197514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.245 [2024-07-15 11:55:57.197553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.245 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.197806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.197855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.198139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.198155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.198274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.198290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.198642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.198658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.198867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.198909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.199147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.199187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.199476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.199517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.199749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.199788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 
00:29:29.246 [2024-07-15 11:55:57.200036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.200077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.200435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.200475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.200857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.200898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.201137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.201177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.201534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.201574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.201989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.202042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.202278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.202294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.202624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.202641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.202957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.202974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.203158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.203174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 
00:29:29.246 [2024-07-15 11:55:57.203440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.203480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.203861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.203902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.204253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.204305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.204607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.204646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.204974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.205020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.205272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.205290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.205543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.205560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.205819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.205839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.205969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.205985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.206316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.206332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 
00:29:29.246 [2024-07-15 11:55:57.206584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.206600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.206852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.206870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.207103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.207120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.207455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.207495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.207796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.207844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.208141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.208157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.208460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.208476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.208751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.208767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.209020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.209037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.209157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.209174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 
00:29:29.246 [2024-07-15 11:55:57.209408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.209424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.209521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.209537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.209746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.246 [2024-07-15 11:55:57.209786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.246 qpair failed and we were unable to recover it. 00:29:29.246 [2024-07-15 11:55:57.210151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.210192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.210577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.210616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.210992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.211032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.211387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.211404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.211706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.211722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.211959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.211976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.212323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.212340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-07-15 11:55:57.212620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.212637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.212822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.212843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.212965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.212986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.213238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.213254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.213563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.213602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.213957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.213998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.214401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.214441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.214741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.214781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.215107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.215148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.215511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.215551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-07-15 11:55:57.215770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.215810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.216058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.216099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.216383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.216423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.216714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.216731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.216988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.217005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.217254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.217271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.217576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.217593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.217843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.217861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.218098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.218114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.218348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.218364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-07-15 11:55:57.218613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.218629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.218954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.218971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.219204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.219221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.219402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.219418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.219667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.219684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.219942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.219959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.220261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.220277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.220609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.220644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.220932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.220973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 00:29:29.247 [2024-07-15 11:55:57.221344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.247 [2024-07-15 11:55:57.221384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.247 qpair failed and we were unable to recover it. 
00:29:29.247 [2024-07-15 11:55:57.221704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-07-15 11:55:57.221744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-07-15 11:55:57.222098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-07-15 11:55:57.222139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-07-15 11:55:57.222497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-07-15 11:55:57.222538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-07-15 11:55:57.222852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-07-15 11:55:57.222893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.247 [2024-07-15 11:55:57.223192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.247 [2024-07-15 11:55:57.223232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.247 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.223547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.223588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.223826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.223887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.224192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.224208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.224397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.224414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.224764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.224781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.225107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.225123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.225428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.225445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.225699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.225716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.225896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.225915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.226153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.226170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.226363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.226380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.226569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.226585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.226851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.226868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.227101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.227118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.227443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.227460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.227786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.227803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.228107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.228124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.228375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.228391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.228678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.228694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.228862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.228880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.229158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.229174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.229457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.229497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.229858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.229900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.230278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.230317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.230543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.230560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.230849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.230867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.231117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.231134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.231461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.231496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.231809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.231858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.232149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.232189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.232569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.232609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.232930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.232970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.233339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.233379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.233685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.233725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.248 [2024-07-15 11:55:57.233966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.248 [2024-07-15 11:55:57.234007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.248 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.234310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.234363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.234598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.234615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.234946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.234963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.235295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.235312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.235569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.235586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.235818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.235893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.236144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.236161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.236430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.236447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.236626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.236642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.236944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.236961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.237145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.237162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.237415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.237455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.237754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.237794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.238143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.238222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.238513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.238592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.238941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.238988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.239354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.239395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.239626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.239666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.239988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.240028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.240344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.240356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.240594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.240606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.240763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.240775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.240947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.240959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.241199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.241211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.241445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.241457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.241713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.241725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.241895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.241907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.242220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.242235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.242420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.242433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.242679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.242691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.242918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.242931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.243221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.243234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.243404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.243416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.243674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.243713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.243997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.244038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.244341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.244353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.244615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.244627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.244850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.244863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.245089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.245102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.245412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.245425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.249 qpair failed and we were unable to recover it.
00:29:29.249 [2024-07-15 11:55:57.245663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.249 [2024-07-15 11:55:57.245675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.245836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.245849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.246183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.246195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.246440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.246453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.246567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.246580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.246666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.246678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.246865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.246878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.247121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.247161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.247471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.247511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.247813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.247863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.248173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.248213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.248492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.248505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.248753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.248765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.249008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.249020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.249183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.249195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.249519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.249558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.249860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.249900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.250201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.250213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.250530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.250542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.250865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.250907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.251211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.251251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.251566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.251605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.251923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.251965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.252269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.252281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.252599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.252611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.252916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.252957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.253313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.253352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.253582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.253596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.253835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.253848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.254069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.254081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.254271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.254283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.254551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.254590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.254919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.254969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.255318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.255331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.255660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.255672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.255906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.255918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.256256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.256268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.256495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.256507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.256746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.256758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.257006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.257019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.257285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.257297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.250 qpair failed and we were unable to recover it.
00:29:29.250 [2024-07-15 11:55:57.257616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.250 [2024-07-15 11:55:57.257628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.257897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.257909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.258132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.258145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.258381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.258394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.258700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.258712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.259031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.259044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.259278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.259291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.259588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.259628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.259878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.259920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.260209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.260249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.260633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.260673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.260918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.260959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.261190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.261230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.261616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.261656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.261967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.262008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.262370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.262410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.262727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.262739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.262986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.263005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.263230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.263242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.263584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.263596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.263923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.263964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.264351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.264390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.264709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.264915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.264955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.265308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.265347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.265622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.265635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.265932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.265946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.266269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.266310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.266702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.266742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.267099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.267140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.267448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.267488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.267871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.267912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.268267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.268307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.268592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.268632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.268866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.268906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.269309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.269349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.269702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.269742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.270119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.270160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.270516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.270555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.251 qpair failed and we were unable to recover it.
00:29:29.251 [2024-07-15 11:55:57.270917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.251 [2024-07-15 11:55:57.270957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.271262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.271303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.271613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.271654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.271961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.272002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.272334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.272375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.272662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.272702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.272922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.272962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.273319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.273359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.273589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.273601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.273778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.252 [2024-07-15 11:55:57.273790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.252 qpair failed and we were unable to recover it.
00:29:29.252 [2024-07-15 11:55:57.274098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.274138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.274440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.274480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.274829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.274844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.275073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.275085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.275379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.275391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.275474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.275486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.275747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.275759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.275938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.275951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.276175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.276187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.276431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.276444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 
00:29:29.252 [2024-07-15 11:55:57.276757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.276769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.276957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.276970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.277235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.277248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.277350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.277361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.277523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.277535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.277784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.277824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.278128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.278167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.278551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.278602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.278913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.278954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.279287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.279326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 
00:29:29.252 [2024-07-15 11:55:57.279621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.279660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.280018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.280058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.280290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.252 [2024-07-15 11:55:57.280330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.252 qpair failed and we were unable to recover it. 00:29:29.252 [2024-07-15 11:55:57.280659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.280698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.281068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.281109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.281400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.281440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.281727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.281767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.282077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.282118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.282478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.282518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.282807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.282820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 11:55:57.282993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.283005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.283245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.283257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.283504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.283544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.283877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.283919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.284226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.284266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.284670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.284710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.285026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.285067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.285398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.285439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.285848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.285888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.286197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.286236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 11:55:57.286526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.286566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.286874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.286914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.287291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.287331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.287709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.287749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.288072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.288113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.288403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.288443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.288790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.288822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.289173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.289214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.289602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.289641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.290035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.290076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 11:55:57.290370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.290382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.290614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.290626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.290791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.290803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.291073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.291086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.291380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.291402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.291670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.291681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.291989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.292030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.292397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.292444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.292848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.292889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.293177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.293217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 
00:29:29.253 [2024-07-15 11:55:57.293542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.293583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.293884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.293925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.294286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.294326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.253 [2024-07-15 11:55:57.294685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.253 [2024-07-15 11:55:57.294725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.253 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.295106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.295146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.295501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.295541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.295828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.295876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.296185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.296228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.296535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.296569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.296793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.296805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 11:55:57.297116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.297129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.297357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.297370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.297612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.297625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.297885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.297899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.298148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.298161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.298416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.298428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.298684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.298696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.298886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.298899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.299140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.299153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.299449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.299461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 11:55:57.299644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.299656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.299907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.299920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.300179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.300191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.300513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.300525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.300778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.300793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.301063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.301076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.301325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.301338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.301654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.301666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.301910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.301923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.302153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.302165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 11:55:57.302410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.302422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.302682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.302695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.302813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.302825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.302988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.303001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.303190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.303202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.303522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.303534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.303828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.303843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.304164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.304202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.304428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.304468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.304789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.304830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 
00:29:29.254 [2024-07-15 11:55:57.305207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.305249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.305624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.305664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.305971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.306012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.306390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.306430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.254 qpair failed and we were unable to recover it. 00:29:29.254 [2024-07-15 11:55:57.306813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.254 [2024-07-15 11:55:57.306860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.307170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.307210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.307423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.307445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.307601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.307613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.307931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.307944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.308189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.308202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 11:55:57.308369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.308381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.308673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.308713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.309072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.309113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.309381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.309404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.309698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.309710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.309975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.310014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.310304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.310344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.310674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.310714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.311030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.311071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.311456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.311497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 11:55:57.311743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.311782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.312152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.312200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.312427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.312440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.312636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.312649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.312822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.312853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.313152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.313192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.313418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.313457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.313689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.313701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.314042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.314054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.314371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.314383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 11:55:57.314685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.314726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.314964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.315004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.315334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.315374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.315638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.315650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.315891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.315904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.316128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.316141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.316379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.316391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.316564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.316577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.316805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.316818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.317149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.317190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 
00:29:29.255 [2024-07-15 11:55:57.317420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.317460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.317751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.317763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.318055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.318068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.318303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.318316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.318552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.255 [2024-07-15 11:55:57.318565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.255 qpair failed and we were unable to recover it. 00:29:29.255 [2024-07-15 11:55:57.318755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 11:55:57.318768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 11:55:57.319077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 11:55:57.319089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 11:55:57.319405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 11:55:57.319418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 11:55:57.319583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 11:55:57.319596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 00:29:29.256 [2024-07-15 11:55:57.319889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.256 [2024-07-15 11:55:57.319902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.256 qpair failed and we were unable to recover it. 
00:29:29.256 [2024-07-15 11:55:57.320211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.256 [2024-07-15 11:55:57.320224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.256 qpair failed and we were unable to recover it.
00:29:29.256 [2024-07-15 11:55:57.320450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.256 [2024-07-15 11:55:57.320463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.256 qpair failed and we were unable to recover it.
00:29:29.256 [2024-07-15 11:55:57.320703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.256 [2024-07-15 11:55:57.320715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.256 qpair failed and we were unable to recover it.
00:29:29.256 [2024-07-15 11:55:57.321027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.256 [2024-07-15 11:55:57.321040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.256 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.321334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.321347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.321609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.321621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.321867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.321879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.322211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.322223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.322463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.322475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.322782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.322794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.322991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.323004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.531 qpair failed and we were unable to recover it.
00:29:29.531 [2024-07-15 11:55:57.323161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.531 [2024-07-15 11:55:57.323174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.323472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.323484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.323782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.323794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.324021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.324037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.324278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.324291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.324584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.324597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.324890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.324903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.325074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.325086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.325315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.325327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.325502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.325514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.325764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.325805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.326164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.326206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.326557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.326593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.326973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.327016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.327373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.327413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.327714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.327754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.328016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.328056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.328424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.328466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.328875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.328917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.329231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.329270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.329521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.329561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.329865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.329905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.330249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.330290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.330593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.330605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.330881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.330893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.331198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.331211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.331505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.331516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.331754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.331767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.331992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.332007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.332262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.332274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.332446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.332458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.332658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.332698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.332931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.332974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.333317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.333359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.333650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.333690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.333953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.333994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.334321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.334361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.334584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.334596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.334784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.334824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.335055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.532 [2024-07-15 11:55:57.335095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.532 qpair failed and we were unable to recover it.
00:29:29.532 [2024-07-15 11:55:57.335316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.335361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.335531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.335543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.335776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.335816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.336191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.336237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.336474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.336486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.336780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.336792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.336967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.336979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.337228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.337241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.337443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.337482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.337729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.337769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.338153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.338194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.338498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.338538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.338813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.338826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.339004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.339016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.339249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.339289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.339512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.339552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.339913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.339954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.340288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.340329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.340659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.340699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.340929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.340971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.341155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.341196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.341480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.341493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.341650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.341663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.341895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.341935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.342221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.342261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.342594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.342634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.342925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.342966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.343296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.343336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.343644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.343684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.343994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.344035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.344373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.344413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.344704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.344744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.345054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.345097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.345341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.345381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.345615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.345654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.345984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.346026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.346271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.346311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.346597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.346638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.346961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.347002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.347302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.347342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.533 [2024-07-15 11:55:57.347717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.533 [2024-07-15 11:55:57.347757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.533 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.348070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.348111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.348410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.348450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.348810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.348885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.349248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.349288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.349550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.349562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.349749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.349762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.350054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.350067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.350236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.350248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.350355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.350377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.350616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.350629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.350808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.350821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.351117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.351130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.351301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.351314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.351484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.351532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.351909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.351950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.352241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.352282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.352542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.352555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.352735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.352748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.352951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.352992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.353294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.353334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.353637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.353650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.353822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.353838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.354094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.354134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.354347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.354361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.354642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.354655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.354893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.354906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.355133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.355146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.355391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.355404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.355646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.355657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.355853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.355865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.356123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.356135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.356306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.356318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.356548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.356561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.356787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.356799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.356966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.356979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.357221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.357234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.357475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.357488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.357786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.357798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.358025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.358038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.358219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.534 [2024-07-15 11:55:57.358231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.534 qpair failed and we were unable to recover it.
00:29:29.534 [2024-07-15 11:55:57.358408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.358420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.358648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.358660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.358978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.358992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.359167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.359180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.359372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.359385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.359637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.359650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.359883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.359896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.360123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.360135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.360371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.360384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.360620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.360633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.360927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.360940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.361044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.361055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.361285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.361297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.361594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.361606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.361846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.361859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.362031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.362044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.362225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.362237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.362499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.362511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.362745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.362757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.362992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.363005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.363265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.363278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.363454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.363466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.363783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.363795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.363974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.363986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.364253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.364265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.364529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.364541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.364714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.364726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.364963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.364976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.365241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.365253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.365509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.365521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.365752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.365765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.365933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.365946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.366104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.366116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.366358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.366370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.366542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.366555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.366781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.366793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.367091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.367104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.367347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.367359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.367555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.367567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.367801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.367814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.535 qpair failed and we were unable to recover it.
00:29:29.535 [2024-07-15 11:55:57.367996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.535 [2024-07-15 11:55:57.368009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.368247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.368259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.368498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.368512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.368748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.368760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.368987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.369000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.369183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.369195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.369368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.369381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.369554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.369566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.369803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.369815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.369990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.370002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.370181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.370193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.370431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.370444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.370611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.370623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.370850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.370863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.371103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.371115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.371312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.371324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.371503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.371516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.371680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.371692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.371954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.371966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.372157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.372169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.372331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.372344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.372581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.372594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.372871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.372883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.373943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.373955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.374186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.374198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.374424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.374435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.374662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.374675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.374915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.374927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.536 qpair failed and we were unable to recover it.
00:29:29.536 [2024-07-15 11:55:57.375240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.536 [2024-07-15 11:55:57.375252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-07-15 11:55:57.375479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-07-15 11:55:57.375491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-07-15 11:55:57.375720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-07-15 11:55:57.375732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-07-15 11:55:57.375916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-07-15 11:55:57.375929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-07-15 11:55:57.376100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.537 [2024-07-15 11:55:57.376112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.537 qpair failed and we were unable to recover it.
00:29:29.537 [2024-07-15 11:55:57.376344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.376356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.376624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.376636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.376861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.376873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.377013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.377025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.377248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.377262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.377433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.377445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.377683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.377699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.377855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.377868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.378038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.378050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.378296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.378308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 
00:29:29.537 [2024-07-15 11:55:57.378467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.378479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.378732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.378744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.378985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.378998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.379224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.379236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.379399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.379410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.379581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.379593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.379817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.379829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.380058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.380070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.380250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.380262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.380517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.380530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 
00:29:29.537 [2024-07-15 11:55:57.380704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.380716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.380946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.380958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.381120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.381132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.381303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.381315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.381463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.381474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.381632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.381644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.381920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.381933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.382166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.382178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.382349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.382361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.382595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.382607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 
00:29:29.537 [2024-07-15 11:55:57.382785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.382797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.383028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.383041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.383200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.383212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.383503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.383515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.383744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.383756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.537 [2024-07-15 11:55:57.383983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.537 [2024-07-15 11:55:57.383996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.537 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.384288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.384301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.384469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.384481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.384641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.384653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.384814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.384826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 
00:29:29.538 [2024-07-15 11:55:57.385090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.385103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.385357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.385369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.385538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.385550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.385788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.385800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.385969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.385983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.386222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.386234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.386410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.386423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.386649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.386661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.387002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.387014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.387204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.387216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 
00:29:29.538 [2024-07-15 11:55:57.387459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.387471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.387697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.387709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.387891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.387903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.388196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.388209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.388439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.388451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.388695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.388707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.388896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.388908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.389203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.389216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.389378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.389390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.389631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.389644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 
00:29:29.538 [2024-07-15 11:55:57.389748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.389761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.389996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.390009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.390236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.390248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.390476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.390488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.390653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.390665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.390916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.390928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.391166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.391178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.391418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.391430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.391614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.391626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 00:29:29.538 [2024-07-15 11:55:57.391785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.538 [2024-07-15 11:55:57.391796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.538 qpair failed and we were unable to recover it. 
00:29:29.538 [2024-07-15 11:55:57.392105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.538 [2024-07-15 11:55:57.392117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.538 qpair failed and we were unable to recover it.
[log condensed: the same triplet then repeats 16 times for tqpair=0x91d210 (11:55:57.392315 through 11:55:57.396143) and 3 times for tqpair=0x7f127c000b90 (11:55:57.396383 through 11:55:57.396835)]
[log condensed: 8 more connect-failed / qpair-failed triplets for tqpair=0x7f127c000b90 (11:55:57.397078 through 11:55:57.398674)]
00:29:29.539 [2024-07-15 11:55:57.398769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92b1f0 is same with the state(5) to be set
00:29:29.539 [2024-07-15 11:55:57.399037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.539 [2024-07-15 11:55:57.399057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.539 qpair failed and we were unable to recover it.
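The one entry above that breaks the pattern comes from nvme_tcp_qpair_set_recv_state(): the receive state machine of tqpair=0x92b1f0 was asked to move to state 5 while it was already in that state, so the transition is logged as an error and skipped. The sketch below shows the assumed shape of that guard; it is schematic and is not SPDK's source.

    /* Assumed shape of the guard behind the message above (schematic, not
     * SPDK's source): a request to enter the current state is reported and
     * treated as a no-op. */
    #include <stdio.h>

    struct tqpair { int recv_state; };

    static void set_recv_state(struct tqpair *tq, int state)
    {
        if (tq->recv_state == state) {
            fprintf(stderr, "The recv state of tqpair=%p is same with the "
                            "state(%d) to be set\n", (void *)tq, state);
            return;                   /* no-op transition, nothing to do */
        }
        tq->recv_state = state;       /* otherwise perform the transition */
    }

    int main(void)
    {
        struct tqpair tq = { .recv_state = 5 };
        set_recv_state(&tq, 5);       /* reproduces the log message */
        return 0;
    }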
[log condensed: 12 more triplets for tqpair=0x91d210 (11:55:57.399399 through 11:55:57.402368), then 8 triplets for tqpair=0x7f1284000b90 (11:55:57.402626 through 11:55:57.404469)]
[log condensed: 12 more triplets for tqpair=0x7f1284000b90 (11:55:57.404771 through 11:55:57.407649), then 8 triplets for tqpair=0x7f127c000b90 (11:55:57.407861 through 11:55:57.409746)]
[log condensed: 30 more triplets for tqpair=0x7f127c000b90 (11:55:57.410010 through 11:55:57.418653), differing only in timestamps]
00:29:29.541 [2024-07-15 11:55:57.418851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.418863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.419113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.419153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.419474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.419514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.419811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.419823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.420003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.420016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.420292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.420305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.420492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.420505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.420665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.420677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.420918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.420931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.421160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.421173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-07-15 11:55:57.421448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.421460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.421628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.421640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.421867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.421908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.422222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.422263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.422569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.422620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.422816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.422828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.423001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.423014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.423265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.423305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.423621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.423661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.423991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.424003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 
00:29:29.541 [2024-07-15 11:55:57.424190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.424203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.424384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.424396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.541 qpair failed and we were unable to recover it. 00:29:29.541 [2024-07-15 11:55:57.424663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.541 [2024-07-15 11:55:57.424708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.425008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.425049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.425340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.425379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.425678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.425717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.425938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.425951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.426283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.426323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.426613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.426652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.426953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.426965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-07-15 11:55:57.427203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.427215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.427402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.427414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.427720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.427732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.427910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.427923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.428154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.428166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.428486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.428498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.428668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.428680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.428928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.428941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.429129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.429141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.429336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.429348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-07-15 11:55:57.429643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.429655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.429824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.429841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.430025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.430037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.430200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.430212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.430452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.430465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.430641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.430654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.430951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.430992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.431279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.431320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.431631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.431670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.431935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.431977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-07-15 11:55:57.432216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.432255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.432498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.432537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.432859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.432899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.433255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.433296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.433600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.433639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.433929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.433969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.434205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.434245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.434606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.434646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.434934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.434946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.435156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.435168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 
00:29:29.542 [2024-07-15 11:55:57.435354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.435366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.435602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.435642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.435815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.542 [2024-07-15 11:55:57.435870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.542 qpair failed and we were unable to recover it. 00:29:29.542 [2024-07-15 11:55:57.436112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.436152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.436392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.436433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.436673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.436712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.436954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.436966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.437141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.437153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.437350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.437389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.437694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.437733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 
00:29:29.543 [2024-07-15 11:55:57.437949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.437962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.438130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.438142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.438300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.438312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.438605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.438617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.438858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.438870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.439115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.439128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.439378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.439390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.439584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.439596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.439699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.439711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.439900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.439912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 
00:29:29.543 [2024-07-15 11:55:57.440103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.440143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.440386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.440426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.440712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.440751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.440951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.440964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.441212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.441224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.441481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.441494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.441729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.441741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.441966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.441979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.442205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.442217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.442448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.442461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 
00:29:29.543 [2024-07-15 11:55:57.442638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.442650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.442811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.442824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.442984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.442997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.443220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.443232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.443472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.443512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.443755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.443794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.444124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.444160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.444475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.444493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.444669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.444687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.444991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.445009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 
00:29:29.543 [2024-07-15 11:55:57.445246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.445263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.445531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.445568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.445870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.445920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.543 [2024-07-15 11:55:57.446299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.543 [2024-07-15 11:55:57.446340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.543 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.446557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.446597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.446895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.446912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.447091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.447108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.447374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.447390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.447744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.447785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.448093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.448134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 
00:29:29.544 [2024-07-15 11:55:57.448428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.448468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.448849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.448891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.449166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.449207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.449508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.449548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.449710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.449750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.450019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.450036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.450211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.450228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.450464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.450480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.450715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.450732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.450970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.450987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 
00:29:29.544 [2024-07-15 11:55:57.451175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.451192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.451375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.451415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.451792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.451844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.452142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.452159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.452261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.452281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.452625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.452641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.452817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.452837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.453072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.453088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.453337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.453354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.453543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.453560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 
00:29:29.544 [2024-07-15 11:55:57.453795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.453811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.454072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.454089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.454339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.454356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.454550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.454567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.454800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.454817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.455055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.455072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.455328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.544 [2024-07-15 11:55:57.455344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.544 qpair failed and we were unable to recover it. 00:29:29.544 [2024-07-15 11:55:57.455530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.455546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.455795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.455811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.455984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.456001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 
00:29:29.545 [2024-07-15 11:55:57.456197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.456236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.456640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.456680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.456964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.456989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.457244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.457261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.457565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.457581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.457837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.457855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.458161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.458179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.458508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.458525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.458800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.458816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 00:29:29.545 [2024-07-15 11:55:57.459070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.545 [2024-07-15 11:55:57.459087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.545 qpair failed and we were unable to recover it. 
00:29:29.545 [2024-07-15 11:55:57.459425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.459464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.459753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.459793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.460062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.460099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.460304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.460333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.460653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.460668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.460965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.460978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.461160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.461173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.461416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.461428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.461666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.461706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.462003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.462044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.462344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.462384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.462605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.462645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.462956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.462997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.463224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.463264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.463578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.463618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.463947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.463988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.464232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.464272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.464509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.464549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.464831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.464848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.465033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.465046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.465210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.465223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.465522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.465562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.465874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.465916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.466177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.466189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.466413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.466425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.466666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.466678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.545 [2024-07-15 11:55:57.466863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.545 [2024-07-15 11:55:57.466875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.545 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.467115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.467156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.467405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.467445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.467676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.467716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.468037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.468077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.468439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.468479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.468725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.468776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.469043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.469084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.469327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.469367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.469586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.469625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.469864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.469905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.470156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.470196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.470484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.470523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.470884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.470942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.471181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.471220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.471453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.471493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.471861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.471902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.472228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.472268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.472487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.472527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.472814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.472864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.473171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.473210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.473451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.473491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.473779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.473820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.474135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.474176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.474469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.474509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.474814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.474865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.475116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.475156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.475401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.475440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.475748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.475760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.475915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.475928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.476157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.476198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.476502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.476542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.476826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.476841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.477097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.477110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.477299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.477312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.477537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.477549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.477796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.477845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.478146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.478186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.478405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.478445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.478772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.478812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.546 [2024-07-15 11:55:57.479074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.546 [2024-07-15 11:55:57.479114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.546 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.479277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.479317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.479616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.479656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.479979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.479991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.480220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.480232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.480476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.480487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.480665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.480679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.480863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.480876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.481049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.481062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.481240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.481252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.481495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.481507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.481772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.481784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.482927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.482940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.483161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.483174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.483404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.483416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.483665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.483677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.483855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.483867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.483974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.483986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.484159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.484172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.484338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.484385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.484618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.484657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.484913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.484968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.485271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.485283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.485511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.485523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.485719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.485731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.485912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.485946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.486182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.486223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.486455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.486494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.486791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.486803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.486981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.486994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.487218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.487231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.487485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.487525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.487745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.487784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.488038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.488079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.488366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.488406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.488626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.488678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.547 [2024-07-15 11:55:57.488916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.547 [2024-07-15 11:55:57.488929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.547 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.489185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.489197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.489434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.489446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.489760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.489771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.489950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.489963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.490219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.490265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.490518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.490559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.490890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.490930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.491221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.491261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.491583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.491623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.491859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.491900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.492188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.492228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.492472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.492512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.492758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.492771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.493017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.493029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.493302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.493342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.493636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.493676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.493919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.493931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.494249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.494262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.494425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.494437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.494623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.494635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.494885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.494898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.495135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.495148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.495333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.495345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.495502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.495515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.495786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.495799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.496112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.496153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.496462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.496502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.496808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.496860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.497239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.497279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.497518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.497558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.497855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.497896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.498201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.498242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.498599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.498639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.498924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.498965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.499193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.499234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.499472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.499512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.499887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.499899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.500141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.500181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.500470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.500510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.500826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.548 [2024-07-15 11:55:57.500874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.548 qpair failed and we were unable to recover it.
00:29:29.548 [2024-07-15 11:55:57.501165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.501205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.501444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.501484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.501785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.501825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.502090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.502102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.502311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.502325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.502567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.502579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.502842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.502855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.503046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.503058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.503226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.503238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.503489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.503529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.503776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.503815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.504114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.504126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.504353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.504364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.504605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.504617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.504779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.504791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.504962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.504975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.505150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.505162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.505320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.505366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.505663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.505704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.505947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.505988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.506269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.506281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.506387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.549 [2024-07-15 11:55:57.506399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.549 qpair failed and we were unable to recover it.
00:29:29.549 [2024-07-15 11:55:57.506498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.506509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.506667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.506679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.506976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.506989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.507194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.507207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.507399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.507411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.507635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.507648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.549 [2024-07-15 11:55:57.507891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.549 [2024-07-15 11:55:57.507904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.549 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.508066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.508078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.508313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.508353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.508639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.508718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 
00:29:29.550 [2024-07-15 11:55:57.508987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.509006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.509173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.509190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.509443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.509459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.509770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.509787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.509982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.509999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.510182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.510222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.510447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.510488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.510784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.510825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.511010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.511027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.511212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.511228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 
00:29:29.550 [2024-07-15 11:55:57.511503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.511543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.511828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.511882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.512189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.512210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.512483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.512500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.512822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.512845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.512960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.512976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.513154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.513170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.513379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.513418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.513797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.513846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.514086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.514102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 
00:29:29.550 [2024-07-15 11:55:57.514212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.514228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.514494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.514511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.514696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.514712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.515082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.515099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.515384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.515424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.515652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.515692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.515991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.516008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.516271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.516288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.516464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.516481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.516681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.516698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 
00:29:29.550 [2024-07-15 11:55:57.516879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.516896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.550 qpair failed and we were unable to recover it. 00:29:29.550 [2024-07-15 11:55:57.517150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.550 [2024-07-15 11:55:57.517167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.517421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.517437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.517623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.517640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.517852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.517893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.518129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.518169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.518473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.518512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.518871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.518912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.519199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.519214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.519487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.519500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-07-15 11:55:57.519685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.519698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.519938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.519950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.520110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.520122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.520286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.520326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.521457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.521480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.521742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.521755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.522051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.522064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.522308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.522321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.522545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.522558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.522803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.522815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-07-15 11:55:57.523008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.523022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.523191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.523204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.523393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.523409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.523643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.523656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.523816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.523828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.524066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.524107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.524360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.524400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.524692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.524731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.525033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.525075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.525307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.525348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-07-15 11:55:57.525636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.525675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.526550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.526573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.526825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.526849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.527111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.527125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.527355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.527368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.527472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.527483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.527714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.527758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.528069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.528110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.528420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.528460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.528670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.528710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 
00:29:29.551 [2024-07-15 11:55:57.529091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.529132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.529905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.551 [2024-07-15 11:55:57.529927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.551 qpair failed and we were unable to recover it. 00:29:29.551 [2024-07-15 11:55:57.530132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.530145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.530379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.530392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.531609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.531632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.531918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.531954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.532208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.532254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.532557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.532598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.532922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.532941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.533199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.533220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-07-15 11:55:57.533408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.533424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.533661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.533678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.533789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.533806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.533981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.533998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.534910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.534934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.535241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.535254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.536045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.536067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.536325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.536338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.536587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.536600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.536793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.536806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-07-15 11:55:57.537103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.537116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.537299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.537312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.537615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.537655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.537972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.538014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.538305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.538317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.538493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.538505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.538673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.538685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.538897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.538909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.539152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.539164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.539336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.539349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 
00:29:29.552 [2024-07-15 11:55:57.539588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.539600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.539765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.539777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.540018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.552 [2024-07-15 11:55:57.540030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.552 qpair failed and we were unable to recover it. 00:29:29.552 [2024-07-15 11:55:57.540215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.540228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.540476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.540488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.540750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.540763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.540994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.541007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.541306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.541318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.541460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.541473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.541636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.541648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 
00:29:29.553 [2024-07-15 11:55:57.541888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.541902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.542247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.542260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.542364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.542376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.542634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.542646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.542813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.542826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.542986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.542998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.543159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.543171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.543418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.543431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.543588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.543600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.543894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.543908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 
00:29:29.553 [2024-07-15 11:55:57.544080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.544092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.544337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.544350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.544475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.544487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.544779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.544792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.544951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.544964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.545155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.545168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.545376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.545388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.545553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.545565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.545788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.545801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.546055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.546068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 
00:29:29.553 [2024-07-15 11:55:57.546257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.546269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.546499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.546512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.546755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.546767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.546938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.546951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.547193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.547205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.547450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.547462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.547700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.547712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.547955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.547969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.553 [2024-07-15 11:55:57.548139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.553 [2024-07-15 11:55:57.548152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.553 qpair failed and we were unable to recover it. 00:29:29.554 [2024-07-15 11:55:57.548384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.554 [2024-07-15 11:55:57.548397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.554 qpair failed and we were unable to recover it. 
00:29:29.554 [2024-07-15 11:55:57.548691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.554 [2024-07-15 11:55:57.548703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.554 qpair failed and we were unable to recover it.
[... the three-line record above repeats roughly 200 more times through 11:55:57.601, identical except for the timestamps and the tqpair handle, which alternates between 0x7f127c000b90 and 0x91d210; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:29:29.560 [2024-07-15 11:55:57.601260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.560 [2024-07-15 11:55:57.601273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.560 qpair failed and we were unable to recover it.
00:29:29.560 [2024-07-15 11:55:57.601443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.601455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.601749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.601761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.602010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.602022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.602203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.602215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.602373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.602385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.602650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.602662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.602848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.602860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.603151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.603163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.603396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.603408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.603590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.603602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-07-15 11:55:57.603846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.603861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.604110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.604122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.604366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.604378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.604554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.604566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.604816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.604829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.605001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.605014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.605178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.605190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.605471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.605484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.605805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.605818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.606072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.606085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-07-15 11:55:57.606300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.606313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.606485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.606497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.606657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.606669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.606895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.606907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.607067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.607079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.607323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.607335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.607586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.607598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.607936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.607949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.608134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.608146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.608234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.608246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 
00:29:29.560 [2024-07-15 11:55:57.608425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.608437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.608659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.608672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.608917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.608929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.609156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.609168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.609409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.609422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.609535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.609546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.609851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.609864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.610191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.610204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.610395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.560 [2024-07-15 11:55:57.610407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.560 qpair failed and we were unable to recover it. 00:29:29.560 [2024-07-15 11:55:57.610582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.610594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 
00:29:29.561 [2024-07-15 11:55:57.610748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.610761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.610994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.611007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.611306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.611318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.611639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.611652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.611829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.611844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.612081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.612094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.612362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.612375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.612602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.612615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.612777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.612789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.613022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.613034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 
00:29:29.561 [2024-07-15 11:55:57.613271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.613285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.613515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.613527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.613717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.613729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.613905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.613917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.614212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.614224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.614402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.614414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.614514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.614526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.614730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.614742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.614988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.615000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.615318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.615331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 
00:29:29.561 [2024-07-15 11:55:57.615570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.615582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.615811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.615823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.615997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.616010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.616270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.616282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.616601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.616613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.616784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.616797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.617036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.617048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.617289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.617301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.617619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.617631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.617901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.617914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 
00:29:29.561 [2024-07-15 11:55:57.618160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.618172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.618429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.618441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.618731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.618744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.618921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.618933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.619097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.619109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.619370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.619383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.619621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.619633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.619826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.619843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.620098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.620111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 00:29:29.561 [2024-07-15 11:55:57.620287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.561 [2024-07-15 11:55:57.620300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.561 qpair failed and we were unable to recover it. 
00:29:29.562 [2024-07-15 11:55:57.620543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.562 [2024-07-15 11:55:57.620556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.562 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.620782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.620794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.621067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.621080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.621246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.621258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.621514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.621527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.621714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.621726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.622018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.622031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.622270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.622282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.622448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.622460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.622631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.622643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 
00:29:29.836 [2024-07-15 11:55:57.622808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.622820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.836 [2024-07-15 11:55:57.623176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.836 [2024-07-15 11:55:57.623189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.836 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.623436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.623449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.623760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.623773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.624016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.624028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.624202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.624214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.624540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.624552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.624887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.624899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.625088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.625100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.625343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.625355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-07-15 11:55:57.625649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.625662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.625886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.625899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.626055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.626067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.626382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.626394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.626648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.626660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.626852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.626864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.627102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.627115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.627354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.627367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.627656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.627669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.627994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.628006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-07-15 11:55:57.628243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.628256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.628524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.628536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.628782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.628794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.629109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.629122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.629453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.629465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.629694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.629707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.629882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.629895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.630148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.630162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.630354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.630367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.630598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.630610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-07-15 11:55:57.630841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.630853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.631186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.631199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.631377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.631389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.631617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.631629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.631887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.631900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.632172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.632185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.632442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.632455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.632649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.632661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.632971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.632984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.633145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.633158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 
00:29:29.837 [2024-07-15 11:55:57.633420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.633432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.633679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.837 [2024-07-15 11:55:57.633691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.837 qpair failed and we were unable to recover it. 00:29:29.837 [2024-07-15 11:55:57.633938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.633951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.634179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.634191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.634347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.634359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.634661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.634673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.634914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.634927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.635175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.635188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.635431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.635443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.635550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.635561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 
00:29:29.838 [2024-07-15 11:55:57.635808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.635820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.636091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.636125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.636373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.636391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.636651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.636668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.636847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.636865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.637177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.637194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.637436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.637455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.637720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.637733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.637910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.637923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 00:29:29.838 [2024-07-15 11:55:57.638241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.838 [2024-07-15 11:55:57.638253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.838 qpair failed and we were unable to recover it. 
(the same three-message sequence then continues for roughly 190 more attempts between 11:55:57.638 and 11:55:57.684, every one reporting connect() failed, errno = 111 for tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420, ending with:)
00:29:29.843 [2024-07-15 11:55:57.684605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.684617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it.
00:29:29.843 [2024-07-15 11:55:57.684722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.684733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.684891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.684904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.685069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.685081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.685327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.685339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.685500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.685512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.685735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.685748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.685993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.686005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.686349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.686362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.686604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.686616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.686917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.686929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-07-15 11:55:57.687117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.687129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.687375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.687388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.687571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.687584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.687843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.687855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.688083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.688096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.688344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.688356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.688648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.688661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.688904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.688918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.689075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.689088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.689192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.689204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 
00:29:29.843 [2024-07-15 11:55:57.689387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.689399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.689713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.689726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.689963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.689976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.690305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.690318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.690549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.690561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.690807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.843 [2024-07-15 11:55:57.690819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.843 qpair failed and we were unable to recover it. 00:29:29.843 [2024-07-15 11:55:57.691130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.691143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.691379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.691391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.691547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.691560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.691858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.691870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-07-15 11:55:57.692038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.692051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.692298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.692311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.692482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.692494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.692726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.692738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.692977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.692989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.693234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.693274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.693565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.693605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.693846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.693886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.694119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.694158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.694478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.694518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-07-15 11:55:57.694761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.694801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.695145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.695186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.695546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.695594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.695911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.695924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.696185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.696198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.696438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.696450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.696672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.696684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.696868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.696880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.697144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.697157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.697392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.697404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-07-15 11:55:57.697721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.697733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.697977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.697989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.698233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.698245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.698501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.698513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.698807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.698819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.699050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.699062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.699236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.699248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.699498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.699543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.699854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.699895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.700247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.700287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 
00:29:29.844 [2024-07-15 11:55:57.700576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.700615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.700862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.700874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.701169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.701206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.701496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.701536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.701882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.701923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.844 [2024-07-15 11:55:57.702219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.844 [2024-07-15 11:55:57.702260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.844 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.702560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.702599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.702996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.703036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.703251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.703263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.703528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.703541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 
00:29:29.845 [2024-07-15 11:55:57.703873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.703886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.704131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.704143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.704306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.704318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.704621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.704661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.704888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.704928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.705179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.705219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.705525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.705565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.705887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.705929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.706146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.706186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.706565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.706605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 
00:29:29.845 [2024-07-15 11:55:57.706782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.706822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.707122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.707163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.707452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.707464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.707649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.707661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.707955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.707967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.708134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.708146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.708339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.708351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.708583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.708595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.708903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.708944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.709249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.709288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 
00:29:29.845 [2024-07-15 11:55:57.709614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.709653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.709824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.709894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.710212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.710251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.710500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.710535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.710849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.710861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.711128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.711140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.711475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.711486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.711901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.711946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.712169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.712208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 00:29:29.845 [2024-07-15 11:55:57.712442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.845 [2024-07-15 11:55:57.712482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.845 qpair failed and we were unable to recover it. 
00:29:29.845 [2024-07-15 11:55:57.712783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.712795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.712963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.712975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.713065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.713076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.713371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.713384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.713613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.713625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.713902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.713914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.714140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.714152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.714424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.714436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.714675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.714687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.714880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.714893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 
00:29:29.846 [2024-07-15 11:55:57.715079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.715091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.715359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.715399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.715639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.715679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.715951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.715992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.716281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.716320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.716628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.716668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.717043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.717084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.717232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.717244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.717400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.717412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.717573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.717585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 
00:29:29.846 [2024-07-15 11:55:57.717773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.717786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.718024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.718037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.718269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.718309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.718561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.718602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.718828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.718843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.719081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.719093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.719259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.719271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.719434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.719447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.719680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.719720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 00:29:29.846 [2024-07-15 11:55:57.720106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.846 [2024-07-15 11:55:57.720147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.846 qpair failed and we were unable to recover it. 
00:29:29.846 [2024-07-15 11:55:57.720430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.846 [2024-07-15 11:55:57.720470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.846 qpair failed and we were unable to recover it.
00:29:29.846 [... the same three-message sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 11:55:57.720430 through 11:55:57.777352, differing only in the bracketed timestamps ...]
00:29:29.852 [2024-07-15 11:55:57.777601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.777613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.777845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.777858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.778085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.778097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.778252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.778264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.778483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.778496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.778836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.778849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.779191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.779231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.779447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.779487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.779774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.779814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.780048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.780089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 
00:29:29.852 [2024-07-15 11:55:57.780383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.780422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.780651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.780691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.781010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.781051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.781346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.781386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.781613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.781625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.781801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.781813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.782133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.782146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.782303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.782315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.782466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.782478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.782657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.782670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 
00:29:29.852 [2024-07-15 11:55:57.782776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.782788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.783013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.783026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.783183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.783195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.783394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.783407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.852 [2024-07-15 11:55:57.783674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.852 [2024-07-15 11:55:57.783718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.852 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.783966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.784006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.784389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.784429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.784719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.784759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.785033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.785074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.785359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.785399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-07-15 11:55:57.785683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.785695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.785888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.785900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.786077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.786118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.786404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.786445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.786729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.786769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.786992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.787032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.787338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.787378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.787569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.787582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.787754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.787766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.788006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.788047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-07-15 11:55:57.788301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.788342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.788563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.788575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.788780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.788792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.789014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.789027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.789182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.789194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.789447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.789487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.789776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.789816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.790125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.790137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.790389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.790401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.790575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.790587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-07-15 11:55:57.790844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.790857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.791096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.791109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.791273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.791285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.791516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.791528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.791804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.791852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.792163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.792202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.792437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.792476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.792863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.792903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.793203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.793243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.793523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.793535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 
00:29:29.853 [2024-07-15 11:55:57.793641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.793653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.793855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.793896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.794217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.794257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.853 [2024-07-15 11:55:57.794503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.853 [2024-07-15 11:55:57.794543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.853 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.794873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.794920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.795230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.795271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.795511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.795551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.795786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.795825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.796215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.796256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.796562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.796602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 
00:29:29.854 [2024-07-15 11:55:57.796834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.796847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.797118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.797130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.797358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.797370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.797607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.797619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.797848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.797860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.798110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.798122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.798466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.798506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.798741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.798781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.799033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.799074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.799317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.799356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 
00:29:29.854 [2024-07-15 11:55:57.799651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.799663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.799908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.799949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.800263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.800303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.800633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.800673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.801028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.801069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.801312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.801352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.801729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.801769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.802134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.802175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.802412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.802452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.802694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.802735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 
00:29:29.854 [2024-07-15 11:55:57.803016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.803029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.803207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.803219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.803471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.803511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.803811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.803859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.804173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.804213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.854 qpair failed and we were unable to recover it. 00:29:29.854 [2024-07-15 11:55:57.804501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.854 [2024-07-15 11:55:57.804540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.804709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.804721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.804988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.805000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.805226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.805238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.805425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.805437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-07-15 11:55:57.805691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.805703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.805996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.806030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.806330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.806370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.806675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.806715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.807022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.807075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.807325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.807365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.807588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.807629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.807912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.807925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.808246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.808258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.808416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.808428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-07-15 11:55:57.808667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.808679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.808910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.808922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.809115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.809128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.809353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.809365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.809551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.809564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.809872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.809913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.810205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.810245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.810586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.810626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.810852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.810881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.811076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.811117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-07-15 11:55:57.811408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.811449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.811688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.811728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.811962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.812003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.812278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.812319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.812568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.812608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.812850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.812862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.813108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.813120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.813350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.813362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.813599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.813639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 00:29:29.855 [2024-07-15 11:55:57.813867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.813908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it. 
00:29:29.855 [2024-07-15 11:55:57.814143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.855 [2024-07-15 11:55:57.814183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.855 qpair failed and we were unable to recover it.
[... the same three-message sequence — posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeated continuously for the same tqpair from 2024-07-15 11:55:57.814 through 11:55:57.871 ...]
00:29:29.861 [2024-07-15 11:55:57.871337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.871351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.871646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.871659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.872009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.872023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.872278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.872294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.872523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.872541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.872705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.872719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.872961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.872975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.873214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.873227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.873412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.873426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.873655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.873668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 
00:29:29.861 [2024-07-15 11:55:57.873890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.873904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.874063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.874078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.874324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.874337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.874496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.874510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.874680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.874693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.874852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.874866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.875041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.875054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.875295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.875308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.875476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.875490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.875665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.875679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 
00:29:29.861 [2024-07-15 11:55:57.875910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.875925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.876155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.876168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.876426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.876440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.876632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.876645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.876871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.876885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.877095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.861 [2024-07-15 11:55:57.877108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.861 qpair failed and we were unable to recover it. 00:29:29.861 [2024-07-15 11:55:57.877345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.877359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.877525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.877539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.877835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.877849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.878016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.878030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-07-15 11:55:57.878254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.878268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.878510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.878524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.878765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.878778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.879010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.879024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.879211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.879224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.879475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.879488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.879576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.879589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.879771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.879784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.880018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.880032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.880200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.880213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-07-15 11:55:57.880524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.880538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.880763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.880777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.881035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.881051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.881223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.881237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.881514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.881530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.881726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.881740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.881964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.881978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.882205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.882219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.882395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.882409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.882580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.882593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-07-15 11:55:57.882824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.882841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.883157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.883170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.883328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.883341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.883585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.883598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.883842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.883855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.884088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.884102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.884344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.884358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.884585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.884598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.884786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.884799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.885035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.885049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 
00:29:29.862 [2024-07-15 11:55:57.885145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.885158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.885383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.885395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.885621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.885635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.885808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.885822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.885994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.886008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.886181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.886195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.862 qpair failed and we were unable to recover it. 00:29:29.862 [2024-07-15 11:55:57.886447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.862 [2024-07-15 11:55:57.886461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.886618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.886631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.886795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.886808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.887018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.887032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-07-15 11:55:57.887199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.887212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.887378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.887391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.887561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.887574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.887846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.887860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.888030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.888043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.888280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.888293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.888519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.888533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.888778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.888791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.888981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.888995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.889156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.889169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-07-15 11:55:57.889268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.889281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.889516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.889529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.889685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.889699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.889860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.889873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.890109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.890124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.890457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.890470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.890710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.890724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.890895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.890908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.891061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.891074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.891231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.891244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-07-15 11:55:57.891492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.891505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.891797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.891810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.892064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.892077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.892233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.892246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.892405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.892419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.892734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.892747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.892974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.892988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.893237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.893250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.893450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.893463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.893718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.893731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 
00:29:29.863 [2024-07-15 11:55:57.893967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.893980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.863 qpair failed and we were unable to recover it. 00:29:29.863 [2024-07-15 11:55:57.894290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.863 [2024-07-15 11:55:57.894304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.894477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.894490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.894660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.894673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.894922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.894935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.895106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.895120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.895367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.895380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.895555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.895568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.895815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.895828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.896010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.896024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 
00:29:29.864 [2024-07-15 11:55:57.896249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.896262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.896509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.896523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.896685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.896698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.896924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.896938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.897096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.897110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.897335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.897349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.897600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.897613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.897866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.897879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.898125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.898138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.898370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.898410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 
00:29:29.864 [2024-07-15 11:55:57.898698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.898738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.899029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.899042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.899311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.899324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.899508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.899521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.899701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.899740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.899997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.900038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.900329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.900370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.900679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.900719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.900950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.900964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 00:29:29.864 [2024-07-15 11:55:57.901139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.864 [2024-07-15 11:55:57.901153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:29.864 qpair failed and we were unable to recover it. 
00:29:29.864 [2024-07-15 11:55:57.901401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.864 [2024-07-15 11:55:57.901442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:29.864 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every connect retry against tqpair=0x7f127c000b90 (addr=10.0.0.2, port=4420) from 11:55:57.901 through 11:55:57.967 ...]
00:29:30.145 [2024-07-15 11:55:57.967393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.145 [2024-07-15 11:55:57.967434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.145 qpair failed and we were unable to recover it.
00:29:30.145 [2024-07-15 11:55:57.967675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.967716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.968052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.968093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.968375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.968388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.968709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.968749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.968982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.968995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.969224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.969265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.969481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.969521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.969926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.969968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.970267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.970308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.970542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.970582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 
00:29:30.145 [2024-07-15 11:55:57.970891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.970932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.971222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.971237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.971547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.971587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.971890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.971931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.972337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.972377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.972611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.972651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.972806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.972865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.973246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.973287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.973569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.973583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.973838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.973851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 
00:29:30.145 [2024-07-15 11:55:57.974076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.145 [2024-07-15 11:55:57.974090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.145 qpair failed and we were unable to recover it. 00:29:30.145 [2024-07-15 11:55:57.974334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.974348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.974642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.974655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.974884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.974898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.975235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.975274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.975586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.975627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.975882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.975922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.976232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.976272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.976514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.976553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.976935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.976976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 
00:29:30.146 [2024-07-15 11:55:57.977297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.977337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.977626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.977666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.977975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.978016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.978328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.978369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.978681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.978720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.979047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.979088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.979405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.979445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.979811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.979860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.980130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.980143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.980320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.980334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 
00:29:30.146 [2024-07-15 11:55:57.980565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.980605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.980859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.980900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.981134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.981175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.981500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.981523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.981747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.981760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.982055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.982068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.982239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.982252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.982582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.982622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.982971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.983012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.983312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.983353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 
00:29:30.146 [2024-07-15 11:55:57.983732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.983772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.984103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.984118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.984342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.984356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.146 qpair failed and we were unable to recover it. 00:29:30.146 [2024-07-15 11:55:57.984645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.146 [2024-07-15 11:55:57.984658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.984883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.984911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.985134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.985175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.985577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.985616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.985939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.985980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.986225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.986238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.986555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.986568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-07-15 11:55:57.986839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.986852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.987039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.987052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.987287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.987327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.987710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.987751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.988118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.988159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.988545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.988586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.988944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.988985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.989275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.989315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.989652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.989693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.989929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.989970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-07-15 11:55:57.990328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.990368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.990599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.990639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.990929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.990988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.991158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.991198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.991423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.991463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.991826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.991875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.992182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.992222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.992525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.992565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.992873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.992954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.993274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.993323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-07-15 11:55:57.993581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.993624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.993861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.993904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.994209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.994251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.994575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.994615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.994764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.994805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.995056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.995093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.995450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.995490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.995657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.995698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.995922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.995963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.996332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.996353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 
00:29:30.147 [2024-07-15 11:55:57.996558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.996598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.996820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.996871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.997262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.997303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.997653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.147 [2024-07-15 11:55:57.997694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.147 qpair failed and we were unable to recover it. 00:29:30.147 [2024-07-15 11:55:57.997991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.998032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:57.998346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.998387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:57.998641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.998682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:57.999044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.999085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:57.999311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.999328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:57.999585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.999625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-07-15 11:55:57.999916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:57.999958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.000256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.000308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.000532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.000572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.000926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.000958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.001266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.001308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.001599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.001645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.002052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.002094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.002405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.002446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.002827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.002880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.003241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.003281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-07-15 11:55:58.003582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.003623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.003938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.003957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.004294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.004334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.004659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.004699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.005003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.005021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.005264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.005305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.005684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.005724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.006044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.006086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.006372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.006390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.006561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.006579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-07-15 11:55:58.006827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.006848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.007176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.007217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.007450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.007491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.007785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.007826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.008136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.008177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.008557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.008597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.008898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.008939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.009299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.009339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.009719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.009759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 00:29:30.148 [2024-07-15 11:55:58.010159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.148 [2024-07-15 11:55:58.010201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:30.148 qpair failed and we were unable to recover it. 
00:29:30.148 [2024-07-15 11:55:58.010562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.148 [2024-07-15 11:55:58.010603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.148 qpair failed and we were unable to recover it.
00:29:30.148 [2024-07-15 11:55:58.010940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.148 [2024-07-15 11:55:58.010982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.148 qpair failed and we were unable to recover it.
00:29:30.148 [2024-07-15 11:55:58.011243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.148 [2024-07-15 11:55:58.011267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.148 qpair failed and we were unable to recover it.
00:29:30.148 [2024-07-15 11:55:58.011549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.148 [2024-07-15 11:55:58.011590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.148 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.011881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.011923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.012222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.012262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.012625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.012666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.012957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.012999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.013297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.013350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.013641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.013682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.013983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.014024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.014323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.014363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.014726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.014767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.015137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.015179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.015416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.015456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.015682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.015723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.016102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.016181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.016488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.016563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.016906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.016945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.017233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.017274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.017588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.017629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.018070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.018112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.018286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.018326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.018579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.018619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.018961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.019004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.019315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.019355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.019590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.019631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.019869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.019911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.020241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.020281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.020465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.020515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.020820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.020875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.021169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.021210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.021531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.021571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.021908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.021949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.022260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.022301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.022684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.022724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.023081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.023123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.023362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.023403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.023712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.023752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.023991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.024033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.024344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.024384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.024756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.024796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.025084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.025423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.025464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.149 qpair failed and we were unable to recover it.
00:29:30.149 [2024-07-15 11:55:58.025793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.149 [2024-07-15 11:55:58.025843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.026153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.026190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.026428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.026469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.026722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.026762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.027080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.027121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.027435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.027476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.027767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.027807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.028111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.028152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.028460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.028473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.028765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.028778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.029016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.029029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.029260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.029273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.029520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.029561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.029870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.029912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.030277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.030317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.030607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.030647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.031025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.031068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.031375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.031388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.031563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.031576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.031758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.031771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.032017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.032059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.032281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.032322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.032609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.032649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.033009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.033032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.033362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.033403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.033780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.033826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.034060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.034101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.034432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.034473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.034807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.034872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.035181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.035222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.035528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.035569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.035952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.035994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.036363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.036404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.036744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.036785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.037189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.037231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.037572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.150 [2024-07-15 11:55:58.037613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.150 qpair failed and we were unable to recover it.
00:29:30.150 [2024-07-15 11:55:58.037915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.037956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.038247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.038287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.038664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.038705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.039093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.039135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.039396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.039410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.039586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.039627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.039988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.040035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.040336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.040376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.040703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.040743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.041033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.041046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.041218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.041258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.041584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.041624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.041778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.041817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.042072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.042113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.042344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.042386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.042740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.042780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.043150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.043230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.043488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.043533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.043929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.043971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.044220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.044261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.044485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.044526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.044785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.044825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.045079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.045120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.045481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.045522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.045755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.045795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.046186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.046238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.046535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.046575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.046868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.046911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.047257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.047298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.047609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.047659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.047989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.048031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.048341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.048382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.048762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.048802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.049125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.049167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.049524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.049566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.049898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.049939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.050268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.050286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.151 [2024-07-15 11:55:58.050455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.151 [2024-07-15 11:55:58.050469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.151 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.050782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.050795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.051039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.051053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.051299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.051335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.051504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.051544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.051855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.051897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.052278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.052695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.052736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.053113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.053155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.053493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.053534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.053892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.053934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.054235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.054275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.054490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.054503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.054592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.054605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.054862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.054904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.055229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.055270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.055562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.055605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.055917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.055958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.056341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.056383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.056717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.056759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.057078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.057120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.057481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.057522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.057933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.057975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.058221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.058234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.058400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.058440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.058811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.058861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.059163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.059204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.059561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.059602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.059897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.059938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.060222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.060235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.060487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.060528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.060817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.060866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.061191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.061232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.061540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.061581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.061801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.061851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.062074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.062115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.062427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.062467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.062779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.062820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.063078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.063119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.152 [2024-07-15 11:55:58.063498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.152 [2024-07-15 11:55:58.063538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.152 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.063769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.063809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.064204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.064245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.064571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.064611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.065002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.065043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.065363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.065403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.065646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.065686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.066073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.066114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.066493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.066534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.066866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.066908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.067147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.067187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.067499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.067539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.067852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.067894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.068289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.068331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.068568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.068580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.068873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.068886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.069213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.069225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.069395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.069408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.069633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.069647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.069889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.069902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.070268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.070320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.070698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.070711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.070892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.070905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.071234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.071274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.071598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.071639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.071940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.071982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.072354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.072367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.072679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.072692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.072927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.072941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.073278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.073319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.073639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.073679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.073988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.074029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.074361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.074402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.074704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.074744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.075025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.075067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.075447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.075488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.075830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.075880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.076204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.076245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.076622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.076662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.076894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.076936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.153 qpair failed and we were unable to recover it.
00:29:30.153 [2024-07-15 11:55:58.077302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.153 [2024-07-15 11:55:58.077343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.077678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.077719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.078132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.078174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.078552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.078592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.078918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.078960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.079265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.079278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.079568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.079582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.079911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.079953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.080253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.080266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.080588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.080629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.081007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.081049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.081287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.081300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.081502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.081515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.081807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.154 [2024-07-15 11:55:58.081820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.154 qpair failed and we were unable to recover it.
00:29:30.154 [2024-07-15 11:55:58.082057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.082071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.082369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.082410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.082786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.082828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.083142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.083177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.083425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.083470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.083829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.083879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.084256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.084302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.084629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.084642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.084957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.084970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.085215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.085228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-07-15 11:55:58.085485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.085526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.085856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.085898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.086188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.086229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.086552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.086565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.086885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.086928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.087251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.087291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.087641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.087654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.087969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.088011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.088359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.088400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.088701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.088714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 
00:29:30.154 [2024-07-15 11:55:58.088983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.089024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.089416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.089456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.089702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.089743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.090032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.090073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.090335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.090348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.154 [2024-07-15 11:55:58.090661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.154 [2024-07-15 11:55:58.090702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.154 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.091068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.091108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.091464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.091504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.091845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.091887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.092183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.092223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-07-15 11:55:58.092560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.092601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.092902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.092944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.093299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.093340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.093721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.093761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.094088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.094129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.094510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.094551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.094865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.094908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.095189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.095230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.095616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.095656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.095975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.096017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-07-15 11:55:58.096397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.096438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.096795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.096845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.097244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.097285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.097664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.097704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.098061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.098103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.098356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.098370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.098708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.098755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.099158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.099201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.099401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.099414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.099670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.099683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 
00:29:30.155 [2024-07-15 11:55:58.099969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.100010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.100394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.100435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.100679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.100692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.100984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.100997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.101223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.101236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.101463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.101477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.101703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.101716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.102051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.102094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.102403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.155 [2024-07-15 11:55:58.102443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.155 qpair failed and we were unable to recover it. 00:29:30.155 [2024-07-15 11:55:58.102726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.102739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 
00:29:30.156 [2024-07-15 11:55:58.103084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.103126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.103482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.103522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.103813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.103826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.104156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.104198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.104568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.104609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.104994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.105035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.105353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.105393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.105774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.105815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.106157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.106201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.106450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.106479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 
00:29:30.156 [2024-07-15 11:55:58.106721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.106762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.107150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.107193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.107430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.107471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.107828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.107846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.108185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.108199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.108544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.108584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.108940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.108982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.109350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.109390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.109690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.109731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.110092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.110133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 
00:29:30.156 [2024-07-15 11:55:58.110514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.110555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.110850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.110892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.111248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.111289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.111668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.111708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.112088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.112129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.112508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.112548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.112917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.112965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.113284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.113325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.113709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.113750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.114130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.114172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 
00:29:30.156 [2024-07-15 11:55:58.114527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.114568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.114883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.114926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.115309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.115349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.115663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.115703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.116026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.116068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.116320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.116334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.116510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.116523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.116753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.116793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.117242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.156 [2024-07-15 11:55:58.117323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.156 qpair failed and we were unable to recover it. 00:29:30.156 [2024-07-15 11:55:58.117747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.117792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-07-15 11:55:58.118154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.118198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.118556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.118596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.118923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.118966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.119352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.119393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.119773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.119813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.120203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.120245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.120623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.120664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.120924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.120965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.121275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.121315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.121698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.121738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-07-15 11:55:58.122030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.122071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.122463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.122503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.122799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.122817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.123014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.123032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.123267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.123285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.123451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.123469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.123779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.123820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.124215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.124256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.124636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.124676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.125059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.125101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-07-15 11:55:58.125424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.125464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.125848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.125889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.126270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.126310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.126618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.126658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.127013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.127055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.127369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.127411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.127732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.127779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.128177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.128218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.128601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.128642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 00:29:30.157 [2024-07-15 11:55:58.128931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.157 [2024-07-15 11:55:58.128949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:30.157 qpair failed and we were unable to recover it. 
00:29:30.157 [2024-07-15 11:55:58.129185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.157 [2024-07-15 11:55:58.129203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:30.157 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats back-to-back for tqpair=0x7f1274000b90 from 11:55:58.129203 through 11:55:58.182513, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:29:30.161 [2024-07-15 11:55:58.182997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.161 [2024-07-15 11:55:58.183079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.161 qpair failed and we were unable to recover it.
[... the identical failure triplet then repeats for tqpair=0x7f1284000b90 from 11:55:58.183079 through 11:55:58.207214, still with addr=10.0.0.2, port=4420 and errno = 111 on every attempt ...]
00:29:30.163 [2024-07-15 11:55:58.207603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.207644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.208000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.208047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.208440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.208481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.208782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.208800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.209143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.209185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.209575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.209628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.209981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.210023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.210414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.210455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.210853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.210896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.211281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.211322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-07-15 11:55:58.211611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.211630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.211993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.212036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.212425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.212467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.212712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.212753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.213029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.213048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.213315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.213333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.213696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.213736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.214054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.214096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.214472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.214514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.214825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.214877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-07-15 11:55:58.215173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.215215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.215531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.215572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.215961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.216003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.216311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.216352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.216740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.216781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.217179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.217222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.217613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.217659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.217951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.217970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.218333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.218375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.218761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.218801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 
00:29:30.163 [2024-07-15 11:55:58.219178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.219220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.219517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.219558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.219944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.219986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.220249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.220290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.220599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.220640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.220980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.163 [2024-07-15 11:55:58.221023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.163 qpair failed and we were unable to recover it. 00:29:30.163 [2024-07-15 11:55:58.221410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.221451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.221769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.221811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.222216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.222257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.222569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.222610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-07-15 11:55:58.222908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.222927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.223101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.223120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.223332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.223375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.223671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.223690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.224044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.224087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.224382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.224423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.224799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.224848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.225252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.225294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.225684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.225736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.226083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.226102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-07-15 11:55:58.226349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.226367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.226700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.226719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.226982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.227002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.227343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.227362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.227722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.227763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.228187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.228230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.228618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.228658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.229040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.229060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.229305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.229323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.229675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.229716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 
00:29:30.164 [2024-07-15 11:55:58.230086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.230128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.230505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.230546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.230880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.230899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.164 [2024-07-15 11:55:58.231210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.164 [2024-07-15 11:55:58.231229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.164 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.231546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.231564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.231927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.231950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.232208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.232240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.232597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.232642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.232944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.232987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.233313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.233332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 
00:29:30.466 [2024-07-15 11:55:58.233592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.233612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.233952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.466 [2024-07-15 11:55:58.233972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.466 qpair failed and we were unable to recover it. 00:29:30.466 [2024-07-15 11:55:58.234299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.234318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.234492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.234511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.234759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.234778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.235081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.235100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.235371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.235389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.235702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.235721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.236095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.236114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.236446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.236465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 11:55:58.236814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.236842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.237176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.237196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.237536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.237555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.237800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.237819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.238106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.238126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.238463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.238482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.238727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.238745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.239058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.239077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.239336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.239355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.239674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.239693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 11:55:58.239940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.239959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.240305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.240323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.240588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.240607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.240874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.240894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.241236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.241255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.241571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.241590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.241860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.241879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.242221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.242240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.242557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.242575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.242865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.242885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 11:55:58.243235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.243254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.243499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.243518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.243804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.243823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.244090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.244110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.244426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.244444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.244726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.244744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.245032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.245052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.245372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.245391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.245754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.245773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.246102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.246121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 
00:29:30.467 [2024-07-15 11:55:58.246462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.246481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.246726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.246745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.467 [2024-07-15 11:55:58.247090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.467 [2024-07-15 11:55:58.247109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.467 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.247453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.247471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.247797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.247815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.248173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.248192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.248521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.248541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.248885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.248905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.249192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.249210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.249554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.249573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 [2024-07-15 11:55:58.249819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.249843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.250048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.250066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.250386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.250404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.250770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.250789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.251033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.251053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.251329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.251348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.251611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.251629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.251908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.251926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.252287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.252328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.252722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.252764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.468 [2024-07-15 11:55:58.253079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.253122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.253518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.253559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.253883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.253926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.254245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.254291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.254665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.254706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.254989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.255039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.255411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.255452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.255872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.255914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.256215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.256257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 00:29:30.468 [2024-07-15 11:55:58.256635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.468 [2024-07-15 11:55:58.256677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.468 qpair failed and we were unable to recover it. 
00:29:30.473 [2024-07-15 11:55:58.331656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.331698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.332095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.332138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.332488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.332530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.332935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.332983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.333301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.333320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.333667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.333709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.334131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.334173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.334475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.334517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.334920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.334962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.335283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.335323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 
00:29:30.473 [2024-07-15 11:55:58.335718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.335760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.336107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.336150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.336478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.336519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.336845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.336887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.337143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.337184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.337486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.337527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.337918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.337961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.473 [2024-07-15 11:55:58.338270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.473 [2024-07-15 11:55:58.338312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.473 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.338685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.338726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.339100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.339146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 11:55:58.339492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.339534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.339940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.339983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.340372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.340414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.340782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.340824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.341147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.341189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.341511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.341552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.341918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.341937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.342254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.342274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.342650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.342692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.343021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.343064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 11:55:58.343379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.343420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.343683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.343724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.344123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.344164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.344559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.344600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.344923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.344966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.345368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.345409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.345749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.345791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.346209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.346253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.346647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.346688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.347090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.347132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 11:55:58.347434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.347477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.347814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.347864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.348182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.348224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.348624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.348672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.349068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.349110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.349479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.349521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.349769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.349810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.350213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.350255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.350651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.350691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.351081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.351124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 
00:29:30.474 [2024-07-15 11:55:58.351457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.351477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.351803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.351822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.352100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.352142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.352372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.352414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.352737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.352779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.353188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.353230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.353578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.353620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.354022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.354042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.474 [2024-07-15 11:55:58.354323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.474 [2024-07-15 11:55:58.354342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.474 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.354687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.354728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 
00:29:30.475 [2024-07-15 11:55:58.355124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.355166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.355469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.355512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.355886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.355927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.356254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.356295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.356691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.356745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.357076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.357118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.357446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.357489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.357885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.357927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.358267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.358309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.358681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.358723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 
00:29:30.475 [2024-07-15 11:55:58.359123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.359166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.359561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.359603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.359982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.360025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.360356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.360375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.360715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.360758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.361048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.361067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.361436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.361479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.361876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.361920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.362238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.362279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.362671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.362714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 
00:29:30.475 [2024-07-15 11:55:58.363089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.363133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.363526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.363568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.363870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.363913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.364316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.364363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.364677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.364719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.365127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.365170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.365563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.365604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.365907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.365949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.366326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.366368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.366770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.366811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 
00:29:30.475 [2024-07-15 11:55:58.367187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.367207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.367520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.367541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.367906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.475 [2024-07-15 11:55:58.367926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.475 qpair failed and we were unable to recover it. 00:29:30.475 [2024-07-15 11:55:58.368255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.368275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.368555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.368574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.368815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.368841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.369121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.369140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.369479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.369521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.369876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.369918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.370300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.370342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 
00:29:30.476 [2024-07-15 11:55:58.370659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.370700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.371016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.371035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.371360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.371402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.371699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.371740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.372061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.372108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.372506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.372548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.372876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.372927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.373190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.373243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.373636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.373677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.374076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.374119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 
00:29:30.476 [2024-07-15 11:55:58.374523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.374565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.374963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.375006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.375355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.375397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.375697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.375738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.376045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.376083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.376387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.376431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.376798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.376852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.377254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.377295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.377696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.377738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.378033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.378052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 
00:29:30.476 [2024-07-15 11:55:58.378425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.378467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.378862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.378910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.379271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.379314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.379711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.379759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.380160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.380203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.380602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.380644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.381047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.381090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.381486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.381528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.381921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.381964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.382345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.382387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 
00:29:30.476 [2024-07-15 11:55:58.382760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.382802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.383151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.383171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.383496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.383539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.383847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.476 [2024-07-15 11:55:58.383891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.476 qpair failed and we were unable to recover it. 00:29:30.476 [2024-07-15 11:55:58.384237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.384279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 00:29:30.477 [2024-07-15 11:55:58.384632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.384673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 00:29:30.477 [2024-07-15 11:55:58.384994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.385038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 00:29:30.477 [2024-07-15 11:55:58.385366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.385407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 00:29:30.477 [2024-07-15 11:55:58.385656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.385697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 00:29:30.477 [2024-07-15 11:55:58.386003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.477 [2024-07-15 11:55:58.386046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.477 qpair failed and we were unable to recover it. 
00:29:30.477 [2024-07-15 11:55:58.386421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.477 [2024-07-15 11:55:58.386462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.477 qpair failed and we were unable to recover it.
00:29:30.477 [2024-07-15 11:55:58.386843 .. 2024-07-15 11:55:58.468635] (the same three-line failure repeats for every intervening reconnect attempt, all with identical parameters: connect() failed, errno = 111 / sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.)
00:29:30.482 [2024-07-15 11:55:58.468958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.482 [2024-07-15 11:55:58.469000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.482 qpair failed and we were unable to recover it.
00:29:30.482 [2024-07-15 11:55:58.469378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.482 [2024-07-15 11:55:58.469426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.482 qpair failed and we were unable to recover it. 00:29:30.482 [2024-07-15 11:55:58.469804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.482 [2024-07-15 11:55:58.469858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.482 qpair failed and we were unable to recover it. 00:29:30.482 [2024-07-15 11:55:58.470160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.482 [2024-07-15 11:55:58.470202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.482 qpair failed and we were unable to recover it. 00:29:30.482 [2024-07-15 11:55:58.470603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.482 [2024-07-15 11:55:58.470646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.482 qpair failed and we were unable to recover it. 00:29:30.482 [2024-07-15 11:55:58.471072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.471114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.471485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.471527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.471874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.471917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.472320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.472339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.472627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.472646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.472937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.472957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 
00:29:30.483 [2024-07-15 11:55:58.473278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.473297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.473659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.473678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.473925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.473944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.474262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.474282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.474607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.474626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.474944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.474963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.475325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.475345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.475705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.475725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.475998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.476017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.476276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.476295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 
00:29:30.483 [2024-07-15 11:55:58.476617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.476636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.476931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.476951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.477290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.477309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.477591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.477610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.477879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.477898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.478237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.478256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.478581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.478600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.478929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.478949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.479214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.479233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.479576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.479596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 
00:29:30.483 [2024-07-15 11:55:58.479873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.479892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.480212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.480232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.480481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.480500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.480776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.480795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.480976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.480996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.481338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.481357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.481647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.481665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.482011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.482030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.482297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.482315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.482583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.482603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 
00:29:30.483 [2024-07-15 11:55:58.482941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.482963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.483312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.483332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.483665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.483685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.483961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.483980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.484302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.483 [2024-07-15 11:55:58.484322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.483 qpair failed and we were unable to recover it. 00:29:30.483 [2024-07-15 11:55:58.484563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.484582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.484846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.484866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.485124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.485143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.485467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.485486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.485844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.485863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 
00:29:30.484 [2024-07-15 11:55:58.486128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.486146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.486415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.486434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.486780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.486799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.487153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.487173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.487467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.487486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.487806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.487827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.488135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.488155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.488478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.488497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.488862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.488881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.489164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.489184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 
00:29:30.484 [2024-07-15 11:55:58.489503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.489522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.489902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.489945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.490371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.490414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.490699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.491019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.491061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.491386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.491817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.491843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.492189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.492231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.492552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.492594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.492919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.492962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 
00:29:30.484 [2024-07-15 11:55:58.493264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.493306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.493613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.493654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.493957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.484 [2024-07-15 11:55:58.493999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.484 qpair failed and we were unable to recover it. 00:29:30.484 [2024-07-15 11:55:58.494395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.494437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.494845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.494892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.495215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.495256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.495700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.495742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.496086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.496128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.496443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.496484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.496882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.496924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 
00:29:30.485 [2024-07-15 11:55:58.497323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.497370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.497697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.497739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.498115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.498158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.498524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.498543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.498810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.498828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.499178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.499220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.499564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.499605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.499995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.500037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.500425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.500467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.500805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.500858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 
00:29:30.485 [2024-07-15 11:55:58.501267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.501309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.501640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.501659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.502001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.502021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.502370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.502411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.502813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.502865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.503202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.503245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.503656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.503698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.504020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.504063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.504436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.504477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.504853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.504896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 
00:29:30.485 [2024-07-15 11:55:58.505267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.505308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.505684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.505726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.506118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.506165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.506443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.506462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.506710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.506747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.507146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.507188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.507548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.507590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.507940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.507983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.508357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.508408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.508801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.508854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 
00:29:30.485 [2024-07-15 11:55:58.509101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.509145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.509461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.509481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.509829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.509883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.485 [2024-07-15 11:55:58.510196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.485 [2024-07-15 11:55:58.510238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.485 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.510632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.510673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.511073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.511131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.511524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.511566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.511941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.511984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.512309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.512351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.512681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.512723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 
00:29:30.486 [2024-07-15 11:55:58.513119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.513168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.513503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.513545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.513965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.514008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.514354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.514396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.514785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.514827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.515239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.515282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.515619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.515661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.515907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.515950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.516261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.516302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.516610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.516651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 
00:29:30.486 [2024-07-15 11:55:58.516956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.516998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.517388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.517430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.517802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.517852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.518205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.518247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.518618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.518637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.518938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.518980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.519305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.519348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.519663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.519705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.520077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.520120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.520515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.520569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 
00:29:30.486 [2024-07-15 11:55:58.520843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.520862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.521243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.521285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.521511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.521552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.521875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.521917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.522214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.522256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.522654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.522697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.523069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.523111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.523503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.523546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.523888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.523931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 00:29:30.486 [2024-07-15 11:55:58.524252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.486 [2024-07-15 11:55:58.524294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.486 qpair failed and we were unable to recover it. 
00:29:30.486 [2024-07-15 11:55:58.524615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.486 [2024-07-15 11:55:58.524657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.486 qpair failed and we were unable to recover it.
00:29:30.486 [2024-07-15 11:55:58.525050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.486 [2024-07-15 11:55:58.525093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.486 qpair failed and we were unable to recover it.
00:29:30.486 [2024-07-15 11:55:58.525430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.486 [2024-07-15 11:55:58.525472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.486 qpair failed and we were unable to recover it.
00:29:30.486 [2024-07-15 11:55:58.525869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.486 [2024-07-15 11:55:58.525911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.486 qpair failed and we were unable to recover it.
00:29:30.486 [2024-07-15 11:55:58.526312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.486 [2024-07-15 11:55:58.526353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.486 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.526718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.526760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.527091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.527134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.527530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.527572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.527967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.528011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.528398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.528440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.528767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.528814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.529243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.529286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.529601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.529619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.529969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.530011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.530413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.530455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.530827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.530879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.531183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.531225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.531622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.531664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.532002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.532045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.532460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.532502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.532880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.532922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.533321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.533363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.533664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.533706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.533941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.533984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.534220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.534262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.534632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.534672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.535058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.535101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.535399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.535453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.535875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.535918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.536295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.536337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.536732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.536774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.537033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.537076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.537466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.537508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.537911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.537953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.538348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.538390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.538779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.538821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.539228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.539269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.539671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.539713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.540013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.540056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.540454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.540496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.540880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.540900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.541186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.541227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.541623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.541664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.542058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.542101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.542427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.542468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.487 qpair failed and we were unable to recover it.
00:29:30.487 [2024-07-15 11:55:58.542782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.487 [2024-07-15 11:55:58.542802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.543155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.543173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.543522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.543564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.543959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.544002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.544307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.544349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.544744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.544792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.545048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.545090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.545484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.545525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.545847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.545889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.546266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.546308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.546704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.546746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.547052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.547094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.547434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.547476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.547895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.547938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.548288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.548330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.548566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.548585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.548930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.548973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.549367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.549408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.549717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.549757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.550180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.550222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.550592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.550612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.550911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.550954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.551271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.551313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.551637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.551678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.552082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.552125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.552442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.552484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.552884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.552927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.553241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.553283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.553682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.553723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.554070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.554111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.554481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.554501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.554819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.554844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.555109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.555129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.555420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.555439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.488 [2024-07-15 11:55:58.555786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.488 [2024-07-15 11:55:58.555828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.488 qpair failed and we were unable to recover it.
00:29:30.489 [2024-07-15 11:55:58.556156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.489 [2024-07-15 11:55:58.556197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.489 qpair failed and we were unable to recover it.
00:29:30.489 [2024-07-15 11:55:58.556593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.489 [2024-07-15 11:55:58.556612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.489 qpair failed and we were unable to recover it.
00:29:30.489 [2024-07-15 11:55:58.556930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.489 [2024-07-15 11:55:58.556950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.489 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.560214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.560236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.560602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.560621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.560946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.560965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.561304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.561323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.561644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.561662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.561921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.561941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.562294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.562313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.562573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.562631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.562955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.562997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.563390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.563431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.563802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.563820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.564174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.564216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.564612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.564663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.565012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.565055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.565381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.565423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.565807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.565870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.566267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.566309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.566681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.566724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.567096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.567139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.567527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.567547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.567847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.567890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.568290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.568332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.568723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.568742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.569006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.569048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.569458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.569499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.569750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.763 [2024-07-15 11:55:58.569769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.763 qpair failed and we were unable to recover it.
00:29:30.763 [2024-07-15 11:55:58.570103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.570145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.570471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.570512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.570886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.570930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.571243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.571285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.571518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.571537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.571827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.571879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.572282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.572325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.572615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.572656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.573057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.573100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.573496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.573539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.573882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.573925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.574339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.574386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.574734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.574776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.575185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.575227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.575469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.575489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.575754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.575807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.576189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.576231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.576636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.576678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.577054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.577097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.577472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.577515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.577911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.577954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.578349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.578395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.578771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.578791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.579080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.579122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.579510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.579559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.579922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.579964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.580277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.580319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.580622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.580663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.581053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.581095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.581410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.581451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.581754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.581775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.582023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.582042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.582376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.582418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.582814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.582866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.583194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.583236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.583479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.583521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.583900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.583943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.584338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.584380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.584675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.584694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.584942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.764 [2024-07-15 11:55:58.584981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.764 qpair failed and we were unable to recover it.
00:29:30.764 [2024-07-15 11:55:58.585405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.585447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.585812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.585875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.586268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.586310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.586703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.586744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.587145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.587187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.587580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.587622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.587935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.587978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.588371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.588424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.588776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.588818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.589230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.589272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.589620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.589662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.590061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.590105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.590404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.590446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.590818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.590871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.591198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.591240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.591464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.591505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.591805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.591860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.592090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.592133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2137235 Killed "${NVMF_APP[@]}" "$@"
00:29:30.765 [2024-07-15 11:55:58.592452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.592495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 [2024-07-15 11:55:58.592871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.765 [2024-07-15 11:55:58.592891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420
00:29:30.765 qpair failed and we were unable to recover it.
00:29:30.765 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:30.765 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:30.765 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:30.765 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:30.765 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.765 [... the failure sequence repeats 8 times between 11:55:58.593232 and 11:55:58.595417, interleaved with the trace above ...]
00:29:30.765 [... the failure sequence repeats 20 more times between 11:55:58.595756 and 11:55:58.602858 ...]
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2138057
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2138057
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2138057 ']'
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:30.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:30.766 [... the failure sequence repeats 6 times between 11:55:58.603275 and 11:55:58.605283, interleaved with the trace above ...]
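The restart path traced above launches a fresh target inside the test's network namespace. A gloss of the launch command's flags, per SPDK's standard application options (the annotations are a reading aid, not part of the log):

    # Same launch as in the trace, with the flags spelled out:
    #   -i 0       shared-memory instance id for the app
    #   -e 0xFFFF  tracepoint group mask (enable all trace groups)
    #   -m 0xF0    reactor core mask (cores 4-7)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0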
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:30.766 11:55:58 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.766 [... the failure sequence repeats 9 times between 11:55:58.605650 and 11:55:58.608751, interleaved with the trace above ...]
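waitforlisten, with rpc_addr=/var/tmp/spdk.sock and max_retries=100 as traced above, blocks until the new target answers on its RPC socket. A minimal sketch of that wait loop, assumed from the traced variables rather than taken from the actual autotest_common.sh implementation (which also verifies the RPC actually responds):

    # Sketch: poll until $pid is listening on the RPC UNIX socket, or give up.
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [ -S "$rpc_addr" ] && return 0           # RPC socket is up
            sleep 0.1
        done
        return 1                                     # timed out waiting
    }
    # usage, with the pid from the trace: waitforlisten_sketch 2138057 /var/tmp/spdk.sock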
00:29:30.766 [2024-07-15 11:55:58.609093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.609114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.609388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.609409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.609813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.609872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.610200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.610242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.610512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.610556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.610870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.610915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.611221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.611264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.611679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.611721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.612083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.612127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.612478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.612521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 
00:29:30.766 [2024-07-15 11:55:58.612900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.766 [2024-07-15 11:55:58.612945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.766 qpair failed and we were unable to recover it. 00:29:30.766 [2024-07-15 11:55:58.613262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.613305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.613695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.614044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.614090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.614428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.614472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.614722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.614764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.615098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.615142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.615391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.615433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.615691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.615714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.615983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.616027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 
00:29:30.767 [2024-07-15 11:55:58.616328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.616369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.616703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.616745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.617177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.617221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.617478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.617522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.617826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.617883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.618226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.618269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.618591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.618632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.619051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.619095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.619348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.619391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.619784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.619826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 
00:29:30.767 [2024-07-15 11:55:58.620220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.620264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.620525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.620568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.620970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.621016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.621393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.621437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.621820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.621889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.622242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.622286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.622645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.622666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.623046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.623066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.623324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.623344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.623698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.623741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 
00:29:30.767 [2024-07-15 11:55:58.623966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.624009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.624312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.624357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.767 qpair failed and we were unable to recover it. 00:29:30.767 [2024-07-15 11:55:58.624638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.767 [2024-07-15 11:55:58.624680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.624931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.624976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.625254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.625296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.625640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.625682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.626109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.626153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.626389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.626435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.626811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.626867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.627268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.627309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 
00:29:30.768 [2024-07-15 11:55:58.627565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.627606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.628026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.628048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.628413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.628432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.628719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.628739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.629077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.629121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.629453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.629496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.629872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.629916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.630221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.630264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.630579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.630633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.630942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.630986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 
00:29:30.768 [2024-07-15 11:55:58.631217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.631260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.631637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.631680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.632007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.632051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.632387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.632431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.632738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.632781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.633163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.633184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.633410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.633452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.633824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.633888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.634263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.634305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.634629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.634672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 
00:29:30.768 [2024-07-15 11:55:58.634867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.634911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.635220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.635241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.635443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.635463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.635782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.635824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.636185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.636227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.636627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.636669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.636933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.636975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.637319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.637361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.637679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.637721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.638111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.638154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 
00:29:30.768 [2024-07-15 11:55:58.638481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.638525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.638867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.638911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.639249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.768 [2024-07-15 11:55:58.639291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.768 qpair failed and we were unable to recover it. 00:29:30.768 [2024-07-15 11:55:58.639521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.639542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.639818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.639873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.640198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.640241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.640637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.640680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.641074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.641118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.641441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.641495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.641790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.641871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 
00:29:30.769 [2024-07-15 11:55:58.642125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.642168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.642543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.642585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.642982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.643027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.643362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.643410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.643525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.643545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.643728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.643770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.644132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.644175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.644492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.644534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.644936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.644985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.645315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.645358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 
00:29:30.769 [2024-07-15 11:55:58.645731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.645773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.645977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.646020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.646339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.646380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.646608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.646650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.646969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.646989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.647135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.647154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.647504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.647545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.647736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.647777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.648200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.648243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 00:29:30.769 [2024-07-15 11:55:58.648619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.769 [2024-07-15 11:55:58.648661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.769 qpair failed and we were unable to recover it. 
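
On Linux, errno = 111 is ECONNREFUSED: the host 10.0.0.2 is reachable but nothing is accepting TCP connections on port 4420 (the conventional NVMe/TCP port), typically because the target has not finished starting or has already torn down its listener. A minimal sketch, independent of SPDK, of how a plain connect(2) surfaces the same errno — the address and port mirror the log and are illustrative only, and this is not SPDK's posix_sock_create():

    /* connect_probe.c - reproduce errno 111 (ECONNREFUSED) by connecting
     * to a TCP port with no listener. Illustrative sketch only. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* If the host is up but nothing listens on the port, this prints:
             * connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }
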
00:29:30.770 [2024-07-15 11:55:58.656527] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization...
00:29:30.770 [2024-07-15 11:55:58.656587] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
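
The -c 0xF0 argument above is a hex coremask pinning the nvmf target to CPU cores 4-7. A minimal sketch of how such a mask decodes to core ids (illustrative only, not DPDK's actual EAL argument parser):

    /* coremask.c - decode a DPDK-style hex coremask such as 0xF0. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xF0", NULL, 16);  /* coremask from the log */

        printf("coremask 0x%lX selects cores:", mask);
        for (unsigned int core = 0; core < 8 * sizeof(mask); core++) {
            if (mask & (1UL << core))
                printf(" %u", core);                     /* prints: 4 5 6 7 */
        }
        printf("\n");
        return 0;
    }
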
00:29:30.773 EAL: No free 2048 kB hugepages reported on node 1
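
The EAL warning means NUMA node 1 has no free 2048 kB hugepages for DPDK's memory pools; initialization only fails if no node can cover the allocation. A minimal pre-flight sketch reading the global counters from /proc/meminfo (a standard Linux interface; per-node counts live under /sys/devices/system/node/node<N>/hugepages/, which is what DPDK itself probes):

    /* hugepage_check.c - read the 2048 kB hugepage counters from /proc/meminfo. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/meminfo", "r");
        char line[256];
        long total = -1, free_pages = -1;

        if (!f) {
            perror("fopen");
            return 1;
        }
        while (fgets(line, sizeof(line), f)) {
            /* Non-matching lines leave the values untouched. */
            sscanf(line, "HugePages_Total: %ld", &total);
            sscanf(line, "HugePages_Free: %ld", &free_pages);
        }
        fclose(f);

        printf("HugePages_Total=%ld HugePages_Free=%ld\n", total, free_pages);
        /* A hugepage-backed target hits "No free ... hugepages" warnings
         * like the one above when the free count reaches 0. */
        return free_pages > 0 ? 0 : 1;
    }
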
00:29:30.774 [2024-07-15 11:55:58.711141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.711160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.711279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.711299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.711625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.711643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.711848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.711867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.712060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.712078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.712407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.712425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.712540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.712558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.712795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.712813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.712986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.713006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 00:29:30.774 [2024-07-15 11:55:58.713190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.774 [2024-07-15 11:55:58.713210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.774 qpair failed and we were unable to recover it. 
00:29:30.775 [2024-07-15 11:55:58.713547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.713566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.713737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.713755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.713940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.713960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.714156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.714175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.714435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.714454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.714764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.714782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.715040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.715058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.715260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.715278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.715461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.715480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.715714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.715732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 
00:29:30.775 [2024-07-15 11:55:58.716001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.716020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.716278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.716297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.716554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.716572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.716828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.716851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.717026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.717044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.717246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.717264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.717595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.717614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.717810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.717831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.718106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.718125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.718393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.718411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 
00:29:30.775 [2024-07-15 11:55:58.718649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.718667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.719024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.719044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.719370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.719388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.719712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.719731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.719984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.720003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.720123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.720141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.720392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.720411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.720533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.720552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.720828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.720851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.721156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.721174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 
00:29:30.775 [2024-07-15 11:55:58.721427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.721445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.721697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.721717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.721906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.721923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.722101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.722120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.722364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.722383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.722663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.722682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.722920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.722939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.723246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.723266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.723571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.723589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 00:29:30.775 [2024-07-15 11:55:58.723847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.775 [2024-07-15 11:55:58.723865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.775 qpair failed and we were unable to recover it. 
00:29:30.775 [2024-07-15 11:55:58.724138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.724156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.724332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.724350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.724680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.724698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.725009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.725028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.725293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.725311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.725425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.725444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.725766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.725785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.726113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.726132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.726462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.726480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.726717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.726736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 
00:29:30.776 [2024-07-15 11:55:58.726999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.727017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.727344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.727363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.727605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.727623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.727928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.727946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.728273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.728291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.728509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.728528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.728792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.728812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1284000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.729165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.729198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.729456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.729473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.729588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.729603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 
00:29:30.776 [2024-07-15 11:55:58.729839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.729854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.730111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.730125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.730461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.730475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.730658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.730672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.730999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.731014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.731200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.731214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.731399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.731413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.731706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.731721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.731971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.731985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.732201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.732215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 
00:29:30.776 [2024-07-15 11:55:58.732511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.732527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.732823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.732842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.733039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.733054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.776 [2024-07-15 11:55:58.733288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.776 [2024-07-15 11:55:58.733302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.776 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.733462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.733475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.733661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.733676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.733994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.734009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.734245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.734260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.734506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.734520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.734703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.734717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 
00:29:30.777 [2024-07-15 11:55:58.735020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.735034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.735338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.735352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.735595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.735609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.735842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.735856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.736174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.736188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.736427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.736442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.736679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.736693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.736938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.736952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.737261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.737275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.737544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.737559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 
00:29:30.777 [2024-07-15 11:55:58.737841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.737854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.738088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.738102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.738328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.738343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.738585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.738599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.738838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.738853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.739184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.739197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.739405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.739419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.739670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.739683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.739922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.740196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.740210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 
00:29:30.777 [2024-07-15 11:55:58.740536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.740551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.740738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.740752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.740994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.741008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.741329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.741343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.741581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.741595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.741888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.741902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.742137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.742151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.742415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.742429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.742673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.742688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.743003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.743028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 
00:29:30.777 [2024-07-15 11:55:58.743267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.743281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.743523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.743539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.743782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.743795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.744058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.777 [2024-07-15 11:55:58.744072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.777 qpair failed and we were unable to recover it. 00:29:30.777 [2024-07-15 11:55:58.744242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.744255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 00:29:30.778 [2024-07-15 11:55:58.744483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.744497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 00:29:30.778 [2024-07-15 11:55:58.744735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.744749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 00:29:30.778 [2024-07-15 11:55:58.744994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.745008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 00:29:30.778 [2024-07-15 11:55:58.745233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.745247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 00:29:30.778 [2024-07-15 11:55:58.745602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.778 [2024-07-15 11:55:58.745615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.778 qpair failed and we were unable to recover it. 
00:29:30.778 [2024-07-15 11:55:58.745897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.778 [2024-07-15 11:55:58.745911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.778 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connect attempt from 11:55:58.745897 through 11:55:58.751798 ...]
00:29:30.778 [2024-07-15 11:55:58.751818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
[... the identical error sequence then continues uninterrupted from 11:55:58.752118 through 11:55:58.800741 (elapsed timestamps 00:29:30.778 through 00:29:30.783); every attempt against tqpair=0x7f127c000b90 (addr=10.0.0.2, port=4420) fails with errno = 111 and no qpair recovers ...]
00:29:30.783 [2024-07-15 11:55:58.801048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.801062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.801306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.801319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.801637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.801650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.801942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.801956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.802248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.802262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.802563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.802577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.802810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.802823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.803150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.803163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.803456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.803469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.803707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.803720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 
00:29:30.783 [2024-07-15 11:55:58.803945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.803959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.804221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.804234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.804496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.804510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.804740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.804753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.804929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.804943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.805236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.805249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.805496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.805509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.805825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.805841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.806076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.806090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.806343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.806356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 
00:29:30.783 [2024-07-15 11:55:58.806660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.806673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.783 qpair failed and we were unable to recover it. 00:29:30.783 [2024-07-15 11:55:58.806993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.783 [2024-07-15 11:55:58.807007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.807300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.807316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.807608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.807621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.807921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.807935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.808256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.808270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.808487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.808500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.808793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.808806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.809104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.809118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.809376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.809389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 
00:29:30.784 [2024-07-15 11:55:58.809693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.809707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.810033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.810046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.810362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.810375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.810689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.810703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.810945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.810959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.811262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.811276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.811531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.811545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.811860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.811874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.812190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.812203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.812516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.812529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 
00:29:30.784 [2024-07-15 11:55:58.812829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.812846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.813069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.813083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.813309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.813322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.813551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.813565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.813904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.813919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.814258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.814271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.814518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.814531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.814855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.814871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.815132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.815145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.815478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.815491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 
00:29:30.784 [2024-07-15 11:55:58.815812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.815827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.816097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.816111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.816376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.816388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.816698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.816711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.817025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.817038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.817332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.817346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.817659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.817672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.817913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.817927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.784 qpair failed and we were unable to recover it. 00:29:30.784 [2024-07-15 11:55:58.818232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.784 [2024-07-15 11:55:58.818245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.818565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.818579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 
00:29:30.785 [2024-07-15 11:55:58.818894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.818909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.819147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.819160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.819410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.819426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.819653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.819666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.819949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.819963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.820269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.820283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.820599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.820612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.820948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.820961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.821305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.821320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.821613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.821627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 
00:29:30.785 [2024-07-15 11:55:58.821853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.821867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.822171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.822185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.822455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.822469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.822779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:30.785 [2024-07-15 11:55:58.822807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.822814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:30.785 [2024-07-15 11:55:58.822819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 [2024-07-15 11:55:58.822825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.822840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:30.785 [2024-07-15 11:55:58.822851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:30.785 [2024-07-15 11:55:58.822970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:30.785 [2024-07-15 11:55:58.823176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.823061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:30.785 [2024-07-15 11:55:58.823189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 [2024-07-15 11:55:58.823148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.823149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:30.785 [2024-07-15 11:55:58.823515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.823528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.823766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.785 [2024-07-15 11:55:58.823780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.785 qpair failed and we were unable to recover it.
00:29:30.785 [2024-07-15 11:55:58.824098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.824112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.824356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.824370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.824529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.824543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.824793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.824806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.825061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.825075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.825327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.825340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.825601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.825614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.825863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.825877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.826194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.826207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.826570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.826584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 
00:29:30.785 [2024-07-15 11:55:58.826876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.826890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.827223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.827237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.827473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.785 [2024-07-15 11:55:58.827487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.785 qpair failed and we were unable to recover it. 00:29:30.785 [2024-07-15 11:55:58.827737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.827751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.828073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.828087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.828328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.828341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.828609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.828623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.828812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.828826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.829130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.829143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.829404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.829417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 
00:29:30.786 [2024-07-15 11:55:58.829639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.829652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.829972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.829986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.830165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.830178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.830453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.830467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.830804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.830818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.830858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92b1f0 (9): Bad file descriptor
00:29:30.786 [2024-07-15 11:55:58.831130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.831159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.831519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.831537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.831847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.831865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.832169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.832187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.832437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.832455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.832663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.832681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.833031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.833049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.833376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.833394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.833742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.833760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.834103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.834118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.834462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.834475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.834812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.834825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.835025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.835039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.835285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.786 [2024-07-15 11:55:58.835299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:30.786 qpair failed and we were unable to recover it.
00:29:30.786 [2024-07-15 11:55:58.835610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.835624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.835852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.835866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.836105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.836120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.836442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.836458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.836779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.836794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.837144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.837159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.837387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.837401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.837668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.837683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.837920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.837934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 00:29:30.786 [2024-07-15 11:55:58.838180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.838194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it. 
00:29:30.786 [2024-07-15 11:55:58.838423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.786 [2024-07-15 11:55:58.838437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:30.786 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats for every retry from 11:55:58.838 to 11:55:58.898 (errno = 111, tqpair=0x7f127c000b90, addr=10.0.0.2, port=4420); duplicate entries collapsed ...]
00:29:31.067 [2024-07-15 11:55:58.898687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.898701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it.
00:29:31.067 [2024-07-15 11:55:58.898959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.898972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.899150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.899164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.899393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.899408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.899717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.899731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.899971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.899985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.900281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.900294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.900550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.900564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.900741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.900755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.901005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.901018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.067 qpair failed and we were unable to recover it. 00:29:31.067 [2024-07-15 11:55:58.901403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.067 [2024-07-15 11:55:58.901417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 
00:29:31.068 [2024-07-15 11:55:58.901662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.901676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.901873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.901887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.902155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.902168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.902438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.902451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.902643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.902657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.902905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.902920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.903123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.903139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.903434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.903448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.903722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.903736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.904008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.904024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 
00:29:31.068 [2024-07-15 11:55:58.904355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.904368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.904556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.904572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.904853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.904867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.905212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.905227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.905456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.905470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.905734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.905748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.906038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.906052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.906348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.906361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.906603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.906616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.906932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.906945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 
00:29:31.068 [2024-07-15 11:55:58.907186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.907200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.907448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.907461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.907704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.907717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.908058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.908072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.908290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.908304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.908505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.908518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.908764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.908778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.909013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.909027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.909323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.909337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.909610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.909624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 
00:29:31.068 [2024-07-15 11:55:58.909918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.068 [2024-07-15 11:55:58.909932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.068 qpair failed and we were unable to recover it. 00:29:31.068 [2024-07-15 11:55:58.910123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.910137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.910386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.910399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.910720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.910734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.911053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.911067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.911331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.911345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.911523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.911537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.911843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.911856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.912072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.912086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.912277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.912291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 
00:29:31.069 [2024-07-15 11:55:58.912598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.912612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.912906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.912919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.913145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.913159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.913419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.913432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.913725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.913739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.914041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.914055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.914373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.914389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.914641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.914655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.914913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.914927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.915173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.915188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 
00:29:31.069 [2024-07-15 11:55:58.915432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.915446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.915672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.915686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.915950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.915964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.916282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.916295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.916553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.916567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.916864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.916878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.917145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.917159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.917475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.917489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.917755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.917769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.918048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.918061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 
00:29:31.069 [2024-07-15 11:55:58.918286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.918299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.918536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.918550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.918784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.918797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.919047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.919061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.919292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.919306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.919534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.919548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.919884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.919898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.920151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.920164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.920494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.920508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.920696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.920710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 
00:29:31.069 [2024-07-15 11:55:58.921022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.069 [2024-07-15 11:55:58.921036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.069 qpair failed and we were unable to recover it. 00:29:31.069 [2024-07-15 11:55:58.921331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.921344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.921646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.921660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.922046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.922092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.922391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.922421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.922678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.922696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.923001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.923020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.923278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.923296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.923579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.923597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 00:29:31.070 [2024-07-15 11:55:58.923838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.070 [2024-07-15 11:55:58.923856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.070 qpair failed and we were unable to recover it. 
[... the same sequence repeats for tqpair=0x91d210 through 2024-07-15 11:55:58.935802 ...]
00:29:31.071 [2024-07-15 11:55:58.936021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.071 [2024-07-15 11:55:58.936036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.071 qpair failed and we were unable to recover it.
[... the same sequence then repeats for tqpair=0x7f127c000b90 through 2024-07-15 11:55:58.947020; every connect() attempt to 10.0.0.2 port 4420 in this span fails with errno = 111 and no qpair is recovered ...]
00:29:31.072 [2024-07-15 11:55:58.947203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.947215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.947459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.947472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.947770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.947783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.948082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.948095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.948355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.948368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.948643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.948656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.948938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.948951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.949180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.949194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.949497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.949509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.949804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.949817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 
00:29:31.072 [2024-07-15 11:55:58.950120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.950132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.950467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.950479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.950715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.950728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.951006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.951018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.951266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.951278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.951570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.951583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.951841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.951853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.952130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.952143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.952457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.952470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.952706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.952718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 
00:29:31.072 [2024-07-15 11:55:58.953055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.953068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.953290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.953303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.953547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.953559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.953735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.953747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.954002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.954015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.954240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.954252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.954520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.072 [2024-07-15 11:55:58.954532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.072 qpair failed and we were unable to recover it. 00:29:31.072 [2024-07-15 11:55:58.954846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.954858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.955181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.955193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.955528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.955541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 
00:29:31.073 [2024-07-15 11:55:58.955811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.955823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.956066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.956078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.956381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.956394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.956693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.956705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.957009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.957022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.957262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.957274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.957443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.957456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.957686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.957698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.958035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.958049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.958388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.958400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 
00:29:31.073 [2024-07-15 11:55:58.958634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.958646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.958925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.958938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.959121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.959134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.959310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.959322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.959629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.959641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.959829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.959845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.960160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.960172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.960415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.960430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.960722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.960735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.961041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.961053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 
00:29:31.073 [2024-07-15 11:55:58.961310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.961323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.961506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.961518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.961755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.961767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.962092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.962105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.962339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.962351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.962672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.962685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.962938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.962951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.963192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.963206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.963446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.963458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.963777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.963789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 
00:29:31.073 [2024-07-15 11:55:58.964091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.964104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.964341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.964353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.964595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.964608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.964872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.964886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.965133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.965146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.965461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.965473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.965776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.965788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.966086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.073 [2024-07-15 11:55:58.966099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.073 qpair failed and we were unable to recover it. 00:29:31.073 [2024-07-15 11:55:58.966440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.966453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.966804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.966816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 
00:29:31.074 [2024-07-15 11:55:58.967148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.967161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.967345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.967357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.967582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.967594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.967871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.967884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.968083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.968095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.968319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.968330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.968522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.968534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.968827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.968843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.969076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.969088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.969379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.969391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 
00:29:31.074 [2024-07-15 11:55:58.969719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.969731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.970001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.970014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.970183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.970195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.970511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.970524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.970769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.970782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.971090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.971103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.971362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.971375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.971672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.971685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.972005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.972018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.972337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.972352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 
00:29:31.074 [2024-07-15 11:55:58.972662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.972674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.973006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.973019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.973292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.973305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.973496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.973508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.973831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.973852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.974118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.974131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.974456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.974468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.974784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.974796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.975177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.975190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.975504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.975517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 
00:29:31.074 [2024-07-15 11:55:58.975830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.975847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.976097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.976110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.976434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.976446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.976720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.976732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.977043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.977056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.977374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.977386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.977630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.977642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.977937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.977950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.074 [2024-07-15 11:55:58.978197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.074 [2024-07-15 11:55:58.978210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.074 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.978437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.978449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 
00:29:31.075 [2024-07-15 11:55:58.978614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.978626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.978859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.978872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.979086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.979352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.979365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.979687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.979699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.979970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.979983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.980232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.980245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.980567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.980579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.980900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.980912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.981119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.981132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 
00:29:31.075 [2024-07-15 11:55:58.981400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.981412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.981595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.981608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.981914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.981927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.982221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.982233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.982550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.982562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.982886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.982899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.983253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.983265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.983556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.983570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.983836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.983849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 00:29:31.075 [2024-07-15 11:55:58.984160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.075 [2024-07-15 11:55:58.984173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.075 qpair failed and we were unable to recover it. 
00:29:31.075 [2024-07-15 11:55:58.984332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.075 [2024-07-15 11:55:58.984347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.075 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every retry attempt from 11:55:58.984 through 11:55:59.041 ...]
00:29:31.080 [2024-07-15 11:55:59.041778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.080 [2024-07-15 11:55:59.041791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.080 qpair failed and we were unable to recover it.
00:29:31.080 [2024-07-15 11:55:59.041972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.041984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.042294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.042307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.042572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.042584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.042761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.042773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.043010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.043022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.043338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.080 [2024-07-15 11:55:59.043350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.080 qpair failed and we were unable to recover it. 00:29:31.080 [2024-07-15 11:55:59.043660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.043672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.043918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.043930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.044177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.044189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.044483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.044496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 
00:29:31.081 [2024-07-15 11:55:59.044790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.044802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.045033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.045045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.045341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.045353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.045630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.045642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.045876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.045890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.046049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.046061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.046372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.046384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.046695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.047056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.047068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.047264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.047276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 
00:29:31.081 [2024-07-15 11:55:59.047597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.047609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.047874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.047886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.048140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.048152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.048378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.048390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.048680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.048692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.048952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.048964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.049229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.049241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.049578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.049590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.049850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.049862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.050063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.050076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 
00:29:31.081 [2024-07-15 11:55:59.050338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.050350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.050662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.050675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.050927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.050939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.051164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.051177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.051438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.051451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.051793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.051804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.052130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.052142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.052319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.052331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.052631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.052642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 00:29:31.081 [2024-07-15 11:55:59.052957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.081 [2024-07-15 11:55:59.052969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.081 qpair failed and we were unable to recover it. 
00:29:31.082 [2024-07-15 11:55:59.053285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.053298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.053666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.053678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.054008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.054020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.054342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.054354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.054700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.054712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.054952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.054964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.055329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.055342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.055692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.055705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.055957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.055970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.056169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.056182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 
00:29:31.082 [2024-07-15 11:55:59.056363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.056375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.056631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.056643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.056891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.056903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.057131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.057144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.057440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.057454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.057774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.057786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.058105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.058119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.058381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.058393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.058638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.058650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.058965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.058977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 
00:29:31.082 [2024-07-15 11:55:59.059215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.059227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.059473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.059486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.059747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.059759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.060102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.060115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.060278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.060290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.060537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.060549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.060722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.060734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.060999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.061011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.061280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.061292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.061534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.061546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 
00:29:31.082 [2024-07-15 11:55:59.061840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.061853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.062024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.062036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.062304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.062316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.062546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.062558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.062786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.062798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.063071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.063083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.063309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.063321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.063672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.063684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.063942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.063955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 00:29:31.082 [2024-07-15 11:55:59.064251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.064264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.082 qpair failed and we were unable to recover it. 
00:29:31.082 [2024-07-15 11:55:59.064584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.082 [2024-07-15 11:55:59.064596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.064863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.064876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.065180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.065193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.065439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.065451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.065640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.065652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.065973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.065987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.066164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.066177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.066468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.066480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.066800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.066813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.067100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.067112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 
00:29:31.083 [2024-07-15 11:55:59.067314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.067327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.067596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.067608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.067945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.067958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.068301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.068314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.068563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.068577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.068794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.068807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.069157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.069169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.069458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.069470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.069771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.069783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.070098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.070111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 
00:29:31.083 [2024-07-15 11:55:59.070294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.070307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.070644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.070657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.070941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.070953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.071186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.071198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.071597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.071610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.071845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.071857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.072173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.072185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.072503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.072515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.072858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.072872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.073191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.073204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 
00:29:31.083 [2024-07-15 11:55:59.073487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.073500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.073747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.073759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.073997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.074009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.074200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.074212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.074372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.074383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.074708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.074720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.075034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.075047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.075230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.075242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.075485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.075498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.075792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.075804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 
00:29:31.083 [2024-07-15 11:55:59.075980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.083 [2024-07-15 11:55:59.075992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.083 qpair failed and we were unable to recover it. 00:29:31.083 [2024-07-15 11:55:59.076321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.076333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.076625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.076637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.076933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.076945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.077211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.077224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.077466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.077478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.077670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.077682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.078055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.078068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.078251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.078263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 00:29:31.084 [2024-07-15 11:55:59.078562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.084 [2024-07-15 11:55:59.078574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.084 qpair failed and we were unable to recover it. 
00:29:31.084 [2024-07-15 11:55:59.078802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.084 [2024-07-15 11:55:59.078814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.084 qpair failed and we were unable to recover it.
00:29:31.086 [... the same connect() failed (errno = 111) / sock connection error / qpair failed triplet repeats for tqpair=0x7f127c000b90 through 11:55:59.098154 ...]
00:29:31.086 [2024-07-15 11:55:59.098452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.086 [2024-07-15 11:55:59.098488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:31.086 qpair failed and we were unable to recover it.
00:29:31.086 [... five more identical failures for tqpair=0x7f1274000b90 through 11:55:59.099877 ...]
00:29:31.089 [... the triplet then resumes for tqpair=0x7f127c000b90 from 11:55:59.100064 through 11:55:59.134744, every attempt failing with errno = 111 against 10.0.0.2 port 4420 ...]
00:29:31.089 [2024-07-15 11:55:59.135052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.135065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.135311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.135323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.135585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.135597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.135870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.135883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.136125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.136137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.136338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.136350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.136678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.136690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.137023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.137036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.137361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.137373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.137697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.137709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 
00:29:31.089 [2024-07-15 11:55:59.137974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.137987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.089 qpair failed and we were unable to recover it. 00:29:31.089 [2024-07-15 11:55:59.138248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.089 [2024-07-15 11:55:59.138260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.138448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.138461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.138675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.138687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.138930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.138942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.139138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.139151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.139382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.139394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.139648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.139661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.139848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.139861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.140042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.140054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 
00:29:31.090 [2024-07-15 11:55:59.140228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.140241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.140443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.140456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.140774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.140786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.141038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.141050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.141355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.141367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.141649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.141661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.141910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.141923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.142170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.142183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.142360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.142373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.142642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.142654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 
00:29:31.090 [2024-07-15 11:55:59.142874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.142886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.143181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.143193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.143421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.143433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.143759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.143771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.144012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.144026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.144183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.144195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.144372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.144385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.144644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.144656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.144894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.144906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.145225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.145237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 
00:29:31.090 [2024-07-15 11:55:59.145529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.145541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.145786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.145798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.145988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.146000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.146318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.146330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.146621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.146634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.146890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.146903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.147141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.147153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.147330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.147342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.147703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.147715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.148024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.148036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 
00:29:31.090 [2024-07-15 11:55:59.148263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.148276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.148522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.090 [2024-07-15 11:55:59.148534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.090 qpair failed and we were unable to recover it. 00:29:31.090 [2024-07-15 11:55:59.148767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.148780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.149016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.149029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.149324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.149337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.149613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.149625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.149867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.149879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.150146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.150159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.150409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.150421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.150672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.150684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 
00:29:31.091 [2024-07-15 11:55:59.150948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.150960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.151232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.151263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.151530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.151548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.151800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.151816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.152159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.152176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.152505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.152521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.152796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.152812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.153165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.153182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.091 [2024-07-15 11:55:59.153431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.091 [2024-07-15 11:55:59.153448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.091 qpair failed and we were unable to recover it. 00:29:31.369 [2024-07-15 11:55:59.153722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.369 [2024-07-15 11:55:59.153739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.369 qpair failed and we were unable to recover it. 
00:29:31.369 [2024-07-15 11:55:59.154020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.369 [2024-07-15 11:55:59.154034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.369 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.154281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.154293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.154595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.154607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.154900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.154912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.155071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.155083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.155360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.155372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.155659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.155672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.155898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.155910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.156154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.156166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.156383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.156395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 
00:29:31.370 [2024-07-15 11:55:59.156649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.156661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.156908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.156920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.157083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.157095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.157393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.157405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.157690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.157702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.157955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.157968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.158221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.158233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.158437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.158450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.158687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.158699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.158945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.158958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 
00:29:31.370 [2024-07-15 11:55:59.159281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.159293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.159488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.159500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.159744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.159756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.160091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.160104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.160345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.160357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.160618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.160630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.160944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.160957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.161203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.161216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.161481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.161493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.161811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.161823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 
00:29:31.370 [2024-07-15 11:55:59.162097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.162109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.162348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.162362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.162603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.162615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.162864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.162877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.163193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.163205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.163458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.163471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.163812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.163824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.370 qpair failed and we were unable to recover it. 00:29:31.370 [2024-07-15 11:55:59.164124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.370 [2024-07-15 11:55:59.164136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.164386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.164398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.164629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.164641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 
00:29:31.371 [2024-07-15 11:55:59.164884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.164897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.165166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.165179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.165368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.165381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.165693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.165705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.165972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.165984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.166183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.166195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.166486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.166498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.166726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.166739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.167042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.167054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.167228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.167240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 
00:29:31.371 [2024-07-15 11:55:59.167487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.167500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.167788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.167800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.168023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.168035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.168284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.168297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.168523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.168535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.168855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.168868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.169109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.169121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.169442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.169454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.169722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.169734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 00:29:31.371 [2024-07-15 11:55:59.170070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.170083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it. 
00:29:31.371 [2024-07-15 11:55:59.170335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.371 [2024-07-15 11:55:59.170348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.371 qpair failed and we were unable to recover it.
00:29:31.371 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats with successive timestamps for tqpair=0x7f127c000b90 through 2024-07-15 11:55:59.217374 ...]
00:29:31.377 [2024-07-15 11:55:59.217623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.377 [2024-07-15 11:55:59.217647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.377 qpair failed and we were unable to recover it.
00:29:31.378 [... the same triplet repeats for tqpair=0x91d210 through 2024-07-15 11:55:59.220495 ...]
00:29:31.378 [2024-07-15 11:55:59.220679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.220695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.220888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.220906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.221161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.221178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.221417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.221433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.221610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.221627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.221803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.221820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.222072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.222088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.222259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.222275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.222480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.222498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.222771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.222785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-15 11:55:59.223079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.223091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.223325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.223337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.223636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.223649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.223874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.223886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.224094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.224106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.224340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.224354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.224538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.224550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.224873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.224886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.225126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.225139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.225397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.225409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 
00:29:31.378 [2024-07-15 11:55:59.225590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.378 [2024-07-15 11:55:59.225602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.378 qpair failed and we were unable to recover it. 00:29:31.378 [2024-07-15 11:55:59.225841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.225853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.226019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.226031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.226255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.226268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.226450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.226462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.226642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.226655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.226881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.226893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.227063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.227076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.227234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.227245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.227500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.227513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 
00:29:31.379 [2024-07-15 11:55:59.227742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.227754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.227998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.228010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.228334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.228347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.228613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.228625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.228799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.228812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.229010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.229023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.229320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.229332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.229639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.229652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.229881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.229893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.230129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.230141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 
00:29:31.379 [2024-07-15 11:55:59.230305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.230316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.230554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.230567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.230757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.230769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.230992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.231005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.231244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.231256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.231439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.231451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.379 [2024-07-15 11:55:59.231685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.379 [2024-07-15 11:55:59.231697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.379 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.231948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.231961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.232143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.232156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.232344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.232356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 
00:29:31.380 [2024-07-15 11:55:59.232536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.232548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.232864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.232876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.233116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.233128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.233310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.233323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.233577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.233589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.233774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.233787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.233968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.233981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.234136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.234148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.234442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.234454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.234644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.234657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 
00:29:31.380 [2024-07-15 11:55:59.234826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.234841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.235044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.235056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.235366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.235378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.235628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.235640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.235866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.235880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.236118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.236131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.236313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.380 [2024-07-15 11:55:59.236326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.380 qpair failed and we were unable to recover it. 00:29:31.380 [2024-07-15 11:55:59.236495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.236507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.236740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.236752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.236941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.236953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 
00:29:31.381 [2024-07-15 11:55:59.237138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.237150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.237328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.237340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.237520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.237532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.237695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.237707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.237946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.237959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.238206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.238219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.238449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.238461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.238689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.238702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.238880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.238892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.239068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.239080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 
00:29:31.381 [2024-07-15 11:55:59.239261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.239273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.239499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.239511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.239770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.239782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.240018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.240030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.240211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.240223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.240449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.240461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.240685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.240697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.240871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.240884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.241047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.241059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.241347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.241359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 
00:29:31.381 [2024-07-15 11:55:59.241630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.241642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.241914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.381 [2024-07-15 11:55:59.241927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.381 qpair failed and we were unable to recover it. 00:29:31.381 [2024-07-15 11:55:59.242235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.242247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.242539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.242552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.242779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.242791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.243018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.243035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.243332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.243345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.243518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.243531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.243775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.243788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.244016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.244028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 
00:29:31.382 [2024-07-15 11:55:59.244186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.244198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.244494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.244507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.244739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.244752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.244915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.244928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.245223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.245235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.245551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.245563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.245786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.245798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.246100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.246112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.246305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.246318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.246557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.246570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 
00:29:31.382 [2024-07-15 11:55:59.246795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.246808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.246984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.246997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.247244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.247257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.247495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.247508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.247688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.247700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.247896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.247909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.248083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.248096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.248353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.248365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.382 [2024-07-15 11:55:59.248683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.382 [2024-07-15 11:55:59.248695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.382 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.248874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.248887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 
00:29:31.383 [2024-07-15 11:55:59.248993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.249005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.249183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.249195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.249446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.249459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.249644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.249657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.249884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.249896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.250149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.250161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.250406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.250419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.250659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.250671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.250901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.250914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 00:29:31.383 [2024-07-15 11:55:59.251090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.383 [2024-07-15 11:55:59.251102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.383 qpair failed and we were unable to recover it. 
00:29:31.383 [2024-07-15 11:55:59.251263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.383 [2024-07-15 11:55:59.251275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.383 qpair failed and we were unable to recover it.
00:29:31.388 [... the three-line sequence above repeats for roughly 200 further consecutive reconnect attempts between 11:55:59.251 and 11:55:59.300; every attempt fails with connect() errno = 111 (ECONNREFUSED) for the same tqpair=0x7f127c000b90 against 10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it." ...]
00:29:31.388 [2024-07-15 11:55:59.300032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.388 [2024-07-15 11:55:59.300045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.388 qpair failed and we were unable to recover it. 00:29:31.388 [2024-07-15 11:55:59.300220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.388 [2024-07-15 11:55:59.300232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.300391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.300404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.300642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.300654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.300882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.300895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.301145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.301157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.301319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.301331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.301533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.301545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.301797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.301809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.301990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.302002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 
00:29:31.389 [2024-07-15 11:55:59.302235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.302248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.302491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.302504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.302744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.302756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.302995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.303231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.303418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.303544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.303782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.303959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.303971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.304229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.304241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 
00:29:31.389 [2024-07-15 11:55:59.304539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.304551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.304732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.304744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.304846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.304859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.305097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.305109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.305284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.305296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.305454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.305466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.305699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.305712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.305904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.305916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.306084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.306096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.306275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.306288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 
00:29:31.389 [2024-07-15 11:55:59.306467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.306479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.306648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.306660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.306907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.306921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.307082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.307095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.307294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.307307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.307534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.307548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.307702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.307713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.307995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.308007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.308180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.308192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.308367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.308379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 
00:29:31.389 [2024-07-15 11:55:59.308561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.308574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.389 [2024-07-15 11:55:59.308836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.389 [2024-07-15 11:55:59.308849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.389 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.309863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.309875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.310047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.310059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.310223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.310235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 
00:29:31.390 [2024-07-15 11:55:59.310394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.310406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.310643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.310655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.310948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.310960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.311962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.311974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.312154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.312166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 
00:29:31.390 [2024-07-15 11:55:59.312393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.312407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.312553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.312565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.312822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.312841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.313013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.313025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.313319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.313331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.313511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.313523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.313692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.313704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.313977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.313990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.314172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.314184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.314410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.314422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 
00:29:31.390 [2024-07-15 11:55:59.314590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.314602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.314781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.314795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.315043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.315055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.315284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.315297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.315543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.315556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.390 [2024-07-15 11:55:59.315730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.390 [2024-07-15 11:55:59.315742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.390 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.315847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.315860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.316066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.316079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.316183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.316195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.316447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.316459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 
00:29:31.391 [2024-07-15 11:55:59.316646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.316668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.316908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.316920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.317111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.317123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.317291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.317303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.317526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.317538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.317763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.317775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.318014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.318186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.318370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.318547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 
00:29:31.391 [2024-07-15 11:55:59.318718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.318968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.318980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.319241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.319253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.319416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.319428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.319726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.319738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.319930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.319943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.320128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.320141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.320361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.320373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.320553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.320565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.320819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.320834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 
00:29:31.391 [2024-07-15 11:55:59.321011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.321023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.321182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.321194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.321420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.321434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.321731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.321743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.321995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.322167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.322356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.322539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.322658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.322858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 
00:29:31.391 [2024-07-15 11:55:59.322969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.322982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.323207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.323219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.323393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.323406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.323579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.323592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.323817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.323829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.323997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.391 [2024-07-15 11:55:59.324010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.391 qpair failed and we were unable to recover it. 00:29:31.391 [2024-07-15 11:55:59.324255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.324267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.324491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.324503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.324751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.324763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.324954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.324967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 
00:29:31.392 [2024-07-15 11:55:59.325120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.325132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.325359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.325371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.325557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.325570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.325838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.325850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.326024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.326035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.326201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.326214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.326463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.326475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.326727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.326739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.326965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.326977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.327143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.327155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 
00:29:31.392 [2024-07-15 11:55:59.327312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.327324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.327620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.327633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.327827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.327843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.328138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.328150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.328343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.328356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.328524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.328536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.328698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.328710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.328893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.328906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.329145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.329158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.329387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.329399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 
00:29:31.392 [2024-07-15 11:55:59.329516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.329528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.329719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.329731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.329904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.329918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.330215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.330228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.330480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.330492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.330660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.330672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.330843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.330855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.331009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.331021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.331245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.331257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.331480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.331492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 
00:29:31.392 [2024-07-15 11:55:59.331646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.331659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.331889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.331901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.332012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.332024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.332219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.332231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.332477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.332489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.332666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.332679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.392 [2024-07-15 11:55:59.332859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.392 [2024-07-15 11:55:59.332871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.392 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.333060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.333072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.333340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.333352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.333544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.333556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 
00:29:31.393 [2024-07-15 11:55:59.333728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.333740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.333982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.333994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.334150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.334162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.334321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.334333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.334658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.334671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.334907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.334920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.335008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.335020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.335261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.335274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.335506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.335518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.335742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.335755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 
00:29:31.393 [2024-07-15 11:55:59.335927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.335940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.336108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.336120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.336343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.336356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.336599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.336612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.336839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.336851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.337105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.337117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.337292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.337304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.337575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.337587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.337839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.337851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.338008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.338020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 
00:29:31.393 [2024-07-15 11:55:59.338261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.338274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.338387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.338398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.338634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.338648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.338983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.338996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.339187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.339200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.339371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.339383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.339553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.339565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.339666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.339678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.339867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.339880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.340051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.340063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 
00:29:31.393 [2024-07-15 11:55:59.340320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.340332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.340491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.340504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.340729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.340741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.340839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.340852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.341021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.341034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.341279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.341291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.341520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.393 [2024-07-15 11:55:59.341532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.393 qpair failed and we were unable to recover it. 00:29:31.393 [2024-07-15 11:55:59.341724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.341736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.341914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.341926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.342170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.342182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 
00:29:31.394 [2024-07-15 11:55:59.342413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.342425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.342669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.342681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.342948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.342968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.343149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.343162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.343336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.343348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.343591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.343603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.343898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.343910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.344086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.344098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.344275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.344287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.344457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.344470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 
00:29:31.394 [2024-07-15 11:55:59.344709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.344722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.344955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.344969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.345221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.345234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.345392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.345404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.345699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.345710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.345889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.345902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.346069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.346081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.346375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.346388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.346502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.346515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.346755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.346767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 
00:29:31.394 [2024-07-15 11:55:59.347003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.347015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.347243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.347256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.347429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.347443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.347705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.347717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.347894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.347907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.348093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.348106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.348280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.348292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.348544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.348557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.348792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.348804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.348985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.348997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 
00:29:31.394 [2024-07-15 11:55:59.349292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.349304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.349485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.349497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.349607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.394 [2024-07-15 11:55:59.349619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.394 qpair failed and we were unable to recover it. 00:29:31.394 [2024-07-15 11:55:59.349879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.349892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.350046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.350058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.350175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.350187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.350367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.350379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.350648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.350661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.350889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.350902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.351141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.351154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 
00:29:31.395 [2024-07-15 11:55:59.351336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.351349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.351502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.351514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.351744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.351757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.352009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.352022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.352220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.352233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.352497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.352509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.352680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.352692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.352870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.352882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.353135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.353147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.353379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.353391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 
00:29:31.395 [2024-07-15 11:55:59.353547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.353559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.353731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.353744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.354015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.354028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.354259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.354272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.354535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.354548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.354636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.354648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.354911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.354924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.355165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.355178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.355268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.355281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.355464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.355477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 
00:29:31.395 [2024-07-15 11:55:59.355704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.355717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.355942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.355954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.356190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.356205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.356427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.356439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.356605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.356619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.356775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.356787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.357129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.357141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.357462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.357474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.357663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.357675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.357864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.357877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 
00:29:31.395 [2024-07-15 11:55:59.358034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.395 [2024-07-15 11:55:59.358046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.395 qpair failed and we were unable to recover it. 00:29:31.395 [2024-07-15 11:55:59.358274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.358286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.358513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.358526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.358819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.358844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.359001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.359013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.359262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.359275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.359505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.359517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.359733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.359745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.360055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.360068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.360228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.360240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 
00:29:31.396 [2024-07-15 11:55:59.360398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.360410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.360648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.360661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.360840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.360852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.361067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.361080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.361263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.361275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.361378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.361390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.361626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.361638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.361950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.361963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.362189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.362201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.362378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.362391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 
00:29:31.396 [2024-07-15 11:55:59.362552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.362564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.362735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.362747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.362976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.362988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.363218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.363231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.363390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.363402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.363579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.363591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.363818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.363830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.364152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.364164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.364435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.364447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.364743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.364756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 
00:29:31.396 [2024-07-15 11:55:59.365042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.365054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.365352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.365364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.365526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.365540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.365719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.365731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.365989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.366001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.366238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.366250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.366439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.366452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.366724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.366738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.366922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.366935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 00:29:31.396 [2024-07-15 11:55:59.367106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.396 [2024-07-15 11:55:59.367118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.396 qpair failed and we were unable to recover it. 
[... 2024-07-15 11:55:59.367362 through 11:55:59.409416: the same three-line failure sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~190 more times without variation ...]
00:29:31.401 [2024-07-15 11:55:59.409663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.409676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.409990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.410002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.410248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.410262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.410433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.410446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.410542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.410554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.410798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.410810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.411140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.411154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.401 qpair failed and we were unable to recover it. 00:29:31.401 [2024-07-15 11:55:59.411308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.401 [2024-07-15 11:55:59.411321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.411571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.411584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.411828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.411847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 
00:29:31.402 [2024-07-15 11:55:59.412112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.412125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.412374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.412387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.412616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.412629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.412920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.412934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.413164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.413176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.413346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.413360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.413610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.413622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.413871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.413884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.414073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.414086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.414320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.414333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 
00:29:31.402 [2024-07-15 11:55:59.414566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.414578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.414817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.414830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.415109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.415121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.415360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.415373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.415616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.415628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.415867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.415880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.416060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.416072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.416345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.416358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.416598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.416610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.416777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.416789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 
00:29:31.402 [2024-07-15 11:55:59.417025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.417038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.417273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.417285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.417460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.417473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.417648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.417661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.417919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.417932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.418171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.418183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.418504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.418517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.418812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.418825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.418933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.418945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.419133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.419146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 
00:29:31.402 [2024-07-15 11:55:59.419406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.419418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.419592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.419604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.419781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.419795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.419952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.419965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.420377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.420390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.420545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.420557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.420797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.420809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.421156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.402 [2024-07-15 11:55:59.421169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.402 qpair failed and we were unable to recover it. 00:29:31.402 [2024-07-15 11:55:59.421333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.421345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.421587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.421599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 
00:29:31.403 [2024-07-15 11:55:59.421845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.421858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.422046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.422059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.422376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.422388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.422710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.422723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.422969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.422982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.423204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.423216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.423420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.423433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.423674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.423686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.423804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.423817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.424138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.424151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 
00:29:31.403 [2024-07-15 11:55:59.424330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.424343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.424579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.424591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.424780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.424792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.424972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.424986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.425222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.425234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.425484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.425496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.425748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.425762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.425993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.426007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.426178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.426190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.426507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.426523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 
00:29:31.403 [2024-07-15 11:55:59.426764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.426776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.427018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.427031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.427193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.427206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.427377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.427390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.427500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.427512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.427826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.427844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.428108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.428120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.428461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.428474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.428769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.428782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.428958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.428972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 
00:29:31.403 [2024-07-15 11:55:59.429145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.429158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.403 [2024-07-15 11:55:59.429361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.403 [2024-07-15 11:55:59.429374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.403 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.429553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.429565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.429861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.429875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.430065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.430078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.430251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.430263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.430493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.430506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.430677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.430690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.430871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.430884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.431058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.431071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 
00:29:31.404 [2024-07-15 11:55:59.431256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.431268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.431508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.431521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.431697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.431710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.431950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.431964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.432262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.432275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.432500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.432512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.432618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.432630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.432786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.432799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.432910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.432923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.433137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.433151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 
00:29:31.404 [2024-07-15 11:55:59.433360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.433373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.433619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.433631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.433724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.433737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.433923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.433936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.434122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.434134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.434337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.434350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.434577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.434590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.434905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.434918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.435146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.435159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.435315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.435329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 
00:29:31.404 [2024-07-15 11:55:59.435573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.404 [2024-07-15 11:55:59.435585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.404 qpair failed and we were unable to recover it. 00:29:31.404 [2024-07-15 11:55:59.435754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.435767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.435946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.435960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.436134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.436147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.436403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.436416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.436661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.436673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.436934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.436947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.437252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.437265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.437370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.437382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.437558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.437571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 
00:29:31.405 [2024-07-15 11:55:59.437799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.437812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.437972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.437986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.438166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.438179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.438343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.438357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.438596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.438608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.438787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.438800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.438982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.438996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.439088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.439100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.439253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.439265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.439516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.439529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 
00:29:31.405 [2024-07-15 11:55:59.439756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.439769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.440985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.440997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.441263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.441275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.441522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.441534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 00:29:31.405 [2024-07-15 11:55:59.441768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.441781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it. 
00:29:31.405 [2024-07-15 11:55:59.442050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.405 [2024-07-15 11:55:59.442062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.405 qpair failed and we were unable to recover it.
00:29:31.405-00:29:31.678 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt on tqpair=0x7f127c000b90 to 10.0.0.2:4420, from 11:55:59.442237 through 11:55:59.474112 ...]
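For reference, errno 111 is ECONNREFUSED on Linux: each connect() above is being refused because nothing is accepting connections on 10.0.0.2:4420 at that moment, consistent with the target-disconnect test this log belongs to (nvmf_target_disconnect_tc2, named in the trace below). A minimal standalone sketch that reproduces the same errno follows; it uses plain POSIX sockets rather than SPDK's posix_sock_create(), and the loopback address is illustrative (the log's target is 10.0.0.2:4420).

/* Standalone sketch (plain POSIX sockets, not SPDK code): connect() to a
 * port nobody is listening on fails with ECONNREFUSED, which is errno 111
 * on Linux -- the value repeated throughout the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* default NVMe/TCP port */
    };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* illustrative address */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener, prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}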
00:29:31.678 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:31.678 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:29:31.678 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:31.678 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:31.678 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.678 [... connect() failed (errno = 111) retries on tqpair=0x7f127c000b90 continue interleaved with the harness trace above, 11:55:59.474279 through 11:55:59.475929 ...]
00:29:31.678-00:29:31.679 [... further connect() failed (errno = 111) / sock connection error retries on tqpair=0x7f127c000b90 to 10.0.0.2:4420, 11:55:59.476091 through 11:55:59.486919, each attempt ending "qpair failed and we were unable to recover it." ...]
00:29:31.679 [2024-07-15 11:55:59.487146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.487160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.487322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.487336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.487582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.487595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.487843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.487856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.487982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.488218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.488230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.488407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.488420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.488599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.488613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.488783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.488798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.488962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.488975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 
00:29:31.679 [2024-07-15 11:55:59.489199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.679 [2024-07-15 11:55:59.489214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.679 qpair failed and we were unable to recover it. 00:29:31.679 [2024-07-15 11:55:59.489428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.489441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.489601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.489614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.489787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.489800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.490031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.490044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.490274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.490287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.490527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.490539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.490785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.490798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.490976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.490988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.491229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.491241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 
00:29:31.680 [2024-07-15 11:55:59.491420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.491432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.491671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.491684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.491855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.491867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.492051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.492064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.492304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.492316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.492500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.492512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.492737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.492749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.492962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.492996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.493198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.493216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 00:29:31.680 [2024-07-15 11:55:59.493392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.493409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it. 
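errno = 111 is ECONNREFUSED on Linux: each connect(2) the initiator retries is actively refused because nothing is listening on 10.0.0.2:4420 while the target is down, which is exactly the condition this target-disconnect test provokes. As a minimal illustration (not part of the test suite), bash can surface the same errno through its /dev/tcp redirection, assuming a Linux host with no listener on that address:

    # sketch only: bash's /dev/tcp redirection performs a plain connect(2);
    # with no listener the shell reports "Connection refused" (ECONNREFUSED, errno 111)
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo "connect refused, exit=$?"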
00:29:31.680 [2024-07-15 11:55:59.493595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.680 [2024-07-15 11:55:59.493609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420 00:29:31.680 qpair failed and we were unable to recover it.
[the connect()/qpair error pair repeats with timestamps from 11:55:59.493882 through 11:55:59.517969, all for tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420]
[... 6 identical reconnect failures for tqpair=0x7f127c000b90, 11:55:59.518154-11:55:59.519054 ...]
00:29:31.683 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... 2 identical reconnect failures, 11:55:59.519298-11:55:59.519552 ...]
00:29:31.683 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... 2 identical reconnect failures, 11:55:59.519763-11:55:59.519950 ...]
00:29:31.683 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... 1 identical reconnect failure, 11:55:59.520208 ...]
00:29:31.683 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 7 identical reconnect failures, 11:55:59.520448-11:55:59.521852 ...]
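(Note: rpc_cmd in the xtrace output above is the test suite's wrapper around SPDK's scripts/rpc.py. A minimal hand-run sketch of the same step, assuming a target app listening on the default RPC socket; the positional arguments are total size in MB and block size in bytes:)

  # create a 64 MB RAM-backed bdev with 512-byte blocks, named Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0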
[... 44 identical reconnect failures for tqpair=0x7f127c000b90, 11:55:59.522038-11:55:59.531520 ...]
00:29:31.685 [2024-07-15 11:55:59.531725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.685 [2024-07-15 11:55:59.531759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:31.685 qpair failed and we were unable to recover it.
00:29:31.685 [2024-07-15 11:55:59.532009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.685 [2024-07-15 11:55:59.532030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1274000b90 with addr=10.0.0.2, port=4420
00:29:31.685 qpair failed and we were unable to recover it.
[... 4 more identical failures for tqpair=0x7f1274000b90, 11:55:59.532272-11:55:59.533008 ...]
[... 18 identical reconnect failures for tqpair=0x7f127c000b90, 11:55:59.533374-11:55:59.537154 ...]
00:29:31.685 Malloc0
[... 2 identical reconnect failures, 11:55:59.537460-11:55:59.537635 ...]
[... 1 identical reconnect failure, 11:55:59.537886 ...]
00:29:31.686 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.686 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:31.686 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.686 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 7 identical reconnect failures interleaved with the lines above, 11:55:59.538059-11:55:59.539469 ...]
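(The transport step above, run by hand, would look like the sketch below; -t selects the transport type, and -o is carried over verbatim from the test's transport options rather than something this log explains:)

  # register the TCP transport with the nvmf target before creating subsystems
  scripts/rpc.py nvmf_create_transport -t tcp -o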
[... 22 identical reconnect failures for tqpair=0x7f127c000b90, 11:55:59.539705-11:55:59.544503 ...]
00:29:31.686 [2024-07-15 11:55:59.544679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... 7 identical reconnect failures, 11:55:59.544760-11:55:59.546115 ...]
[... 22 identical reconnect failures for tqpair=0x7f127c000b90, 11:55:59.546430-11:55:59.551806 ...]
00:29:31.687 [2024-07-15 11:55:59.552070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.687 [2024-07-15 11:55:59.552092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x91d210 with addr=10.0.0.2, port=4420
00:29:31.687 qpair failed and we were unable to recover it.
[... 4 more identical failures for tqpair=0x91d210, 11:55:59.552422-11:55:59.553252 ...]
00:29:31.687 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[... 2 more identical failures for tqpair=0x91d210, 11:55:59.553433-11:55:59.553727 ...]
00:29:31.687 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:31.687 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.687 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... 4 identical reconnect failures for tqpair=0x91d210 (11:55:59.554079-11:55:59.555055), then 4 for tqpair=0x7f127c000b90 (11:55:59.555396-11:55:59.556208), interleaved with the lines above ...]
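(Equivalent manual sketch of the subsystem step; -a allows any host to connect and -s sets the serial number. The listener call is an assumed follow-up, shown because it is the usual way to expose a subsystem on the 10.0.0.2:4420 address the initiator keeps probing; it has not appeared in this trace:)

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # assumed follow-up: listen on the address/port the initiator is retrying
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420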
00:29:31.688 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.688 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:31.688 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.688 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.688 [2024-07-15 11:55:59.561518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.688 [2024-07-15 11:55:59.561531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.688 qpair failed and we were unable to recover it.
00:29:31.688 [2024-07-15 11:55:59.563065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.688 [2024-07-15 11:55:59.563079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.688 qpair failed and we were unable to recover it.
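The namespace attach at target_disconnect.sh@24 assumes a bdev named Malloc0 was created earlier in the script. Sketched as direct RPCs; the 64 MiB / 512-byte geometry below is illustrative, not taken from this run:

  # Create a RAM-backed bdev, then expose it as a namespace of the subsystem
  ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0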
00:29:31.689 [2024-07-15 11:55:59.568677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.689 [2024-07-15 11:55:59.568689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.689 qpair failed and we were unable to recover it.
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.689 [2024-07-15 11:55:59.570591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.689 [2024-07-15 11:55:59.570603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.689 qpair failed and we were unable to recover it.
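Only at target_disconnect.sh@25 does the subsystem get a TCP listener, which is why every connect() so far was refused. The equivalent direct RPCs, sketched; nvmf_create_transport is included on the assumption the transport was set up once during target start:

  ./scripts/rpc.py nvmf_create_transport -t tcp    # once per target (assumed done at setup)
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420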
00:29:31.689 [2024-07-15 11:55:59.572919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:31.689 [2024-07-15 11:55:59.572927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.689 [2024-07-15 11:55:59.572943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f127c000b90 with addr=10.0.0.2, port=4420
00:29:31.689 qpair failed and we were unable to recover it.
00:29:31.689 [2024-07-15 11:55:59.575285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.689 [2024-07-15 11:55:59.575386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.689 [2024-07-15 11:55:59.575407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.689 [2024-07-15 11:55:59.575418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.689 [2024-07-15 11:55:59.575427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90
00:29:31.689 [2024-07-15 11:55:59.575450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.689 qpair failed and we were unable to recover it.
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.689 [2024-07-15 11:55:59.585231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.689 [2024-07-15 11:55:59.585323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.689 [2024-07-15 11:55:59.585343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.689 [2024-07-15 11:55:59.585354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.689 [2024-07-15 11:55:59.585363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90
00:29:31.689 [2024-07-15 11:55:59.585384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.689 qpair failed and we were unable to recover it.
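The failure mode changes here: with the listener up, TCP connect() succeeds, but the target rejects the I/O queue-pair CONNECT because it no longer recognizes controller ID 0x1. On the host that surfaces as sct 1, sc 130 — status code type 1 (command specific) with status 0x82, which corresponds to the NVMe-oF "Connect Invalid Parameters" status — and this churn is precisely what the target_disconnect test exercises. A hypothetical readiness gate that would avoid the earlier ECONNREFUSED spin, sketched in shell:

  # Wait until the target's TCP listener actually accepts connections
  until nc -z 10.0.0.2 4420; do
      sleep 0.1
  done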
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.689 11:55:59 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2137487
00:29:31.690 [2024-07-15 11:55:59.595227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.690 [2024-07-15 11:55:59.595310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.690 [2024-07-15 11:55:59.595329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.690 [2024-07-15 11:55:59.595339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.690 [2024-07-15 11:55:59.595348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90
00:29:31.690 [2024-07-15 11:55:59.595367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.690 qpair failed and we were unable to recover it.
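The wait 2137487 at target_disconnect.sh@50 blocks on the background workload the script launched earlier (2137487 is its PID in this run); the test's verdict hinges on that job's exit status once the disconnect storm ends. The underlying bash pattern, sketched with a placeholder workload name:

  reconnect_workload &    # placeholder for the background job the script starts
  pid=$!
  # ...disrupt and restore the target while the job keeps reconnecting...
  wait "$pid"             # exit status of the job decides pass/fail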
00:29:31.953 [2024-07-15 11:55:59.926077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.926159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.926177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.926186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.926195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.926213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:55:59.936131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.936259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.936278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.936287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.936296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.936314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:55:59.946055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.946135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.946153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.946162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.946171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.946189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 
00:29:31.953 [2024-07-15 11:55:59.956161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.956240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.956258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.956268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.956276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.956295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:55:59.966180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.966276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.966294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.966303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.966312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.966331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:55:59.976237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.976323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.976340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.976350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.976358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.976376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 
00:29:31.953 [2024-07-15 11:55:59.986234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.986316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.986334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.986344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.986353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.986371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:55:59.996286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:55:59.996384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:55:59.996401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:55:59.996411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:55:59.996420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:55:59.996438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:56:00.006294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:56:00.006384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:56:00.006405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:56:00.006414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:56:00.006423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:56:00.006442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 
00:29:31.953 [2024-07-15 11:56:00.016339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:56:00.016424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:56:00.016444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:56:00.016454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:56:00.016463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:56:00.016482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:56:00.026395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:56:00.026487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:56:00.026507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:56:00.026517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:56:00.026526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:56:00.026546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:31.953 [2024-07-15 11:56:00.036422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:56:00.036533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:56:00.036551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:56:00.036561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:56:00.036570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:56:00.036590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 
00:29:31.953 [2024-07-15 11:56:00.046440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.953 [2024-07-15 11:56:00.046527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.953 [2024-07-15 11:56:00.046544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.953 [2024-07-15 11:56:00.046554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.953 [2024-07-15 11:56:00.046563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:31.953 [2024-07-15 11:56:00.046586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.953 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.056429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.056509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.056527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.056538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.056547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.056565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.066490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.066566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.066584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.066594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.066603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.066622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 
00:29:32.215 [2024-07-15 11:56:00.077111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.077207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.077228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.077239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.077250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.077272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.086550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.086637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.086656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.086666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.086674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.086693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.096620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.096734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.096755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.096765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.096774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.096793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 
00:29:32.215 [2024-07-15 11:56:00.106652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.106730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.106747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.106757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.106766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.106784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.116647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.116732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.116750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.116760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.116769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.116787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.126654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.126749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.126766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.126776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.126785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.126804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 
00:29:32.215 [2024-07-15 11:56:00.136689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.136775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.136792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.136802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.136813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.136837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.146732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.146815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.146836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.146846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.146854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.146873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.156751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.156835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.156853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.156863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.156871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.156890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 
00:29:32.215 [2024-07-15 11:56:00.166767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.166865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.166883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.166892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.166901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.166920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.215 [2024-07-15 11:56:00.176731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.215 [2024-07-15 11:56:00.176814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.215 [2024-07-15 11:56:00.176835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.215 [2024-07-15 11:56:00.176846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.215 [2024-07-15 11:56:00.176855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.215 [2024-07-15 11:56:00.176873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.215 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.186848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.186931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.186949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.186959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.186967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.186986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 
00:29:32.216 [2024-07-15 11:56:00.196850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.196929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.196946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.196956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.196965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.196983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.206871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.206954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.206972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.206981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.206990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.207008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.216922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.217004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.217022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.217032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.217040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.217059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 
00:29:32.216 [2024-07-15 11:56:00.226947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.227033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.227050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.227060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.227072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.227091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.236937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.237018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.237035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.237045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.237054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.237072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.246986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.247080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.247097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.247107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.247116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.247135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 
00:29:32.216 [2024-07-15 11:56:00.257036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.257120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.257137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.257147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.257156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.257175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.267087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.267167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.267185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.267194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.267203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.267221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.277089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.277180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.277197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.277207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.277216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.277234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 
00:29:32.216 [2024-07-15 11:56:00.287145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.287241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.287258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.287268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.287276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.287295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.297170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.297252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.297270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.297280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.297289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.297307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 00:29:32.216 [2024-07-15 11:56:00.307152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.216 [2024-07-15 11:56:00.307244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.216 [2024-07-15 11:56:00.307261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.216 [2024-07-15 11:56:00.307271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.216 [2024-07-15 11:56:00.307279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.216 [2024-07-15 11:56:00.307298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.216 qpair failed and we were unable to recover it. 
00:29:32.217 [2024-07-15 11:56:00.317194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.217 [2024-07-15 11:56:00.317286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.217 [2024-07-15 11:56:00.317304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.217 [2024-07-15 11:56:00.317318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.217 [2024-07-15 11:56:00.317327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.217 [2024-07-15 11:56:00.317346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.217 qpair failed and we were unable to recover it. 00:29:32.477 [2024-07-15 11:56:00.327216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-07-15 11:56:00.327311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-07-15 11:56:00.327328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-07-15 11:56:00.327338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-07-15 11:56:00.327347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.477 [2024-07-15 11:56:00.327365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-07-15 11:56:00.337358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-07-15 11:56:00.337484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-07-15 11:56:00.337503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-07-15 11:56:00.337513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-07-15 11:56:00.337522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.477 [2024-07-15 11:56:00.337541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.477 qpair failed and we were unable to recover it. 
00:29:32.477 [2024-07-15 11:56:00.347198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-07-15 11:56:00.347281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-07-15 11:56:00.347299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-07-15 11:56:00.347309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-07-15 11:56:00.347317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.477 [2024-07-15 11:56:00.347334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.477 qpair failed and we were unable to recover it. 00:29:32.477 [2024-07-15 11:56:00.357339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.477 [2024-07-15 11:56:00.357453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.477 [2024-07-15 11:56:00.357472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.477 [2024-07-15 11:56:00.357482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.477 [2024-07-15 11:56:00.357491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.478 [2024-07-15 11:56:00.357509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.367324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.367417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.367435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.367444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.367453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.478 [2024-07-15 11:56:00.367473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-07-15 11:56:00.377402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.377486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.377504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.377514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.377522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90 00:29:32.478 [2024-07-15 11:56:00.377540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.387374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.387482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.387514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.387529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.387542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.387569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.397442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.397522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.397541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.397551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.397560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.397578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-07-15 11:56:00.407464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.407547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.407569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.407579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.407588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.407606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.417480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.417565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.417582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.417593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.417601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.417618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.427520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.427596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.427615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.427625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.427633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.427651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-07-15 11:56:00.437563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.437646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.437664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.437674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.437683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.437699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.447645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.447728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.447746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.447756] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.447764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.447781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 00:29:32.478 [2024-07-15 11:56:00.457585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.478 [2024-07-15 11:56:00.457671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.478 [2024-07-15 11:56:00.457689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.478 [2024-07-15 11:56:00.457698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.478 [2024-07-15 11:56:00.457707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:32.478 [2024-07-15 11:56:00.457724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:32.478 qpair failed and we were unable to recover it. 
00:29:32.478 [2024-07-15 11:56:00.467622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.467701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.467719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.467729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.467738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.467755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.477640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.477722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.477741] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.477751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.477760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.477777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.487792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.487882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.487900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.487910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.487919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.487938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.497707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.497794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.497815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.497826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.497839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.497856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.507743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.507824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.507847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.507857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.507866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.507884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.517809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.517922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.517941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.517951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.517960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.517978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.527801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.527888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.527907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.527916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.527925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.527943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.537765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.537852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.537873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.537883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.537892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.537912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.547861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.547945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.547965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.547976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.547986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.548005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.557840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.557925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.557943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.557953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.557963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.557980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.568000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.568084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.568104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.568115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.568125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.479 [2024-07-15 11:56:00.568143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.479 qpair failed and we were unable to recover it.
00:29:32.479 [2024-07-15 11:56:00.577968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.479 [2024-07-15 11:56:00.578097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.479 [2024-07-15 11:56:00.578116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.479 [2024-07-15 11:56:00.578126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.479 [2024-07-15 11:56:00.578135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.480 [2024-07-15 11:56:00.578152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.480 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.587942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.588017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.588038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.588048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.588058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.588075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.597943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.598025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.598043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.598053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.598062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.598079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.608028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.608115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.608134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.608145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.608154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.608171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.618049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.618134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.618154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.618163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.618173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.618190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.628042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.628169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.628188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.628198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.628207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.628229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.638040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.638124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.638142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.638152] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.638161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.638177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.648075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.648161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.648178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.648188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.648197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.648214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.658165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.658243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.658261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.658270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.658279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.658296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.668192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.668298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.668316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.668325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.668334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.668351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.678229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.678304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.678329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.678339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.678348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.678365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.688262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.688374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.688393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.688403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.688413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.688431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.698255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.698333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.698352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.698361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.698370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.698387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.708278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.708359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.708377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.708387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.708396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.708412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.718331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.718412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.718430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.718440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.718449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.718469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.728348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.728435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.728455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.728465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.728474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.728491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.738388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.738467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.738485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.738495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.738504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.738522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.748377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.748459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.748477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.748486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.748495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.748512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.758424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.758502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.758520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.758529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.741 [2024-07-15 11:56:00.758539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.741 [2024-07-15 11:56:00.758555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.741 qpair failed and we were unable to recover it.
00:29:32.741 [2024-07-15 11:56:00.768451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.741 [2024-07-15 11:56:00.768530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.741 [2024-07-15 11:56:00.768551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.741 [2024-07-15 11:56:00.768561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.768569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.768586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.778565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.778646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.778664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.778675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.778683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.778701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.788455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.788530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.788549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.788559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.788568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.788585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.798579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.798659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.798677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.798687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.798697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.798714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.808563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.808642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.808660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.808670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.808682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.808700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.818539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.818625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.818643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.818653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.818662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.818679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.828649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.828732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.828750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.828760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.828768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.828785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:32.742 [2024-07-15 11:56:00.838695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:32.742 [2024-07-15 11:56:00.838780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:32.742 [2024-07-15 11:56:00.838798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:32.742 [2024-07-15 11:56:00.838808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:32.742 [2024-07-15 11:56:00.838817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:32.742 [2024-07-15 11:56:00.838837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:32.742 qpair failed and we were unable to recover it.
00:29:33.001 [2024-07-15 11:56:00.848658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.848827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.848850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.848860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.848869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.848887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.858764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.858849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.858868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.858878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.858887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.858904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.868767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.868856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.868874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.868885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.868894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.868911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.878779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.878866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.878884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.878893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.878902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.878920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.888842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.888925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.888943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.888953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.888962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.888979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.898849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.898932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.898951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.898961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.898973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.898991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.908878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.908957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.908975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.908985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.908994] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.909011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.918916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.918999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.919017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.919027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.919036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.919053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.928868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.928951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.928969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.928979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.928988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.929004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.938939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.939026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.939044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.939054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.939062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.939079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.948991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.949076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.949094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.949103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.949112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.949129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.959004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.959082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.959100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.959110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.959118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.959135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.969054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.969138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.969155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.969165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.969174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.969191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.979034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.979117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.979134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.979144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.002 [2024-07-15 11:56:00.979153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.002 [2024-07-15 11:56:00.979170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.002 qpair failed and we were unable to recover it.
00:29:33.002 [2024-07-15 11:56:00.989045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.002 [2024-07-15 11:56:00.989122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.002 [2024-07-15 11:56:00.989141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.002 [2024-07-15 11:56:00.989151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.003 [2024-07-15 11:56:00.989162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.003 [2024-07-15 11:56:00.989179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.003 qpair failed and we were unable to recover it.
00:29:33.003 [2024-07-15 11:56:00.999118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.003 [2024-07-15 11:56:00.999204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.003 [2024-07-15 11:56:00.999223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.003 [2024-07-15 11:56:00.999233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.003 [2024-07-15 11:56:00.999242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.003 [2024-07-15 11:56:00.999259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.003 qpair failed and we were unable to recover it.
00:29:33.003 [2024-07-15 11:56:01.009140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.003 [2024-07-15 11:56:01.009222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.003 [2024-07-15 11:56:01.009240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.003 [2024-07-15 11:56:01.009250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.003 [2024-07-15 11:56:01.009259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.003 [2024-07-15 11:56:01.009275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.003 qpair failed and we were unable to recover it.
00:29:33.003 [2024-07-15 11:56:01.019181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.003 [2024-07-15 11:56:01.019264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.003 [2024-07-15 11:56:01.019282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.003 [2024-07-15 11:56:01.019292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.003 [2024-07-15 11:56:01.019301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.003 [2024-07-15 11:56:01.019319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.003 qpair failed and we were unable to recover it.
00:29:33.003 [2024-07-15 11:56:01.029216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.003 [2024-07-15 11:56:01.029307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.003 [2024-07-15 11:56:01.029325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.003 [2024-07-15 11:56:01.029335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.003 [2024-07-15 11:56:01.029344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.003 [2024-07-15 11:56:01.029361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.003 qpair failed and we were unable to recover it.
00:29:33.003 [2024-07-15 11:56:01.039252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.039332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.039351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.039360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.039369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.039386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 11:56:01.049283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.049363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.049380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.049390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.049399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.049416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 11:56:01.059287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.059369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.059387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.059397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.059406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.059423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 11:56:01.069314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.069396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.069416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.069426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.069435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.069451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 11:56:01.079368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.079443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.079461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.079475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.079483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.079500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.003 [2024-07-15 11:56:01.089398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.089479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.089497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.089507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.089516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.089532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 
00:29:33.003 [2024-07-15 11:56:01.099402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.003 [2024-07-15 11:56:01.099488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.003 [2024-07-15 11:56:01.099508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.003 [2024-07-15 11:56:01.099518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.003 [2024-07-15 11:56:01.099527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.003 [2024-07-15 11:56:01.099545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.003 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.109377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.109454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.109472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.109482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.109491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.109508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.119475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.119556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.119574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.119584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.119592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.119609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 
00:29:33.263 [2024-07-15 11:56:01.129520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.129603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.129621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.129631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.129640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.129656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.139561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.139639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.139657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.139667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.139676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.139692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.149559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.149650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.149668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.149677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.149686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.149704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 
00:29:33.263 [2024-07-15 11:56:01.159602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.159684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.159702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.159711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.159720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.159736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.169611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.169692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.169710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.169723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.169732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.169749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.179656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.179775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.179795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.179804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.179813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.179831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 
00:29:33.263 [2024-07-15 11:56:01.189716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.189800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.189819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.189828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.189841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.189859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.199725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.199806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.199824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.199838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.199847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.199865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.209727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.209806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.209824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.209838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.209847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.209864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 
00:29:33.263 [2024-07-15 11:56:01.219679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.219756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.219774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.219784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.219792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.219809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.229781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.229866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.229884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.229894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.229902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.229919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 00:29:33.263 [2024-07-15 11:56:01.239842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.263 [2024-07-15 11:56:01.239923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.263 [2024-07-15 11:56:01.239941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.263 [2024-07-15 11:56:01.239951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.263 [2024-07-15 11:56:01.239960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.263 [2024-07-15 11:56:01.239977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.263 qpair failed and we were unable to recover it. 
00:29:33.263 [2024-07-15 11:56:01.249893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.250006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.250024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.250033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.250043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.250060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.259879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.259960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.259978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.259991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.259999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.260016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.269904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.269987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.270005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.270015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.270023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.270040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 
00:29:33.264 [2024-07-15 11:56:01.279947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.280026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.280044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.280053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.280062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.280079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.289999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.290078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.290095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.290106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.290114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.290131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.300099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.300257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.300279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.300289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.300298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.300317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 
00:29:33.264 [2024-07-15 11:56:01.310027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.310111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.310130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.310140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.310149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.310166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.320023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.320119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.320137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.320146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.320155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.320174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.330092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.330171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.330189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.330199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.330207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.330224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 
00:29:33.264 [2024-07-15 11:56:01.340034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.340116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.340133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.340143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.340152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.340168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.350137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.350220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.350241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.350251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.350259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.350277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 00:29:33.264 [2024-07-15 11:56:01.360186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.264 [2024-07-15 11:56:01.360264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.264 [2024-07-15 11:56:01.360282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.264 [2024-07-15 11:56:01.360292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.264 [2024-07-15 11:56:01.360301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.264 [2024-07-15 11:56:01.360318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.264 qpair failed and we were unable to recover it. 
00:29:33.524 [2024-07-15 11:56:01.370201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.370284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.370302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.370312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.370321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.370338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.380227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.380318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.380336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.380346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.380355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.380373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.390307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.390384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.390403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.390412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.390421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.390438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 
00:29:33.524 [2024-07-15 11:56:01.400294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.400375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.400393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.400403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.400412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.400429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.410248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.410327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.410345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.410355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.410364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.410381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.420349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.420430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.420448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.420458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.420467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.420484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 
00:29:33.524 [2024-07-15 11:56:01.430382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.430507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.430526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.430535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.430544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.430562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.440392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.440556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.440581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.440591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.440600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.440617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.450442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.450522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.450540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.450551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.450560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.450577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 
00:29:33.524 [2024-07-15 11:56:01.460461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.460551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.460569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.460578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.460587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.460604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.470509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.470590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.470608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.470618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.470627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.470644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.480527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.480609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.480627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.480637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.480645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.480666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 
00:29:33.524 [2024-07-15 11:56:01.490562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.490651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.524 [2024-07-15 11:56:01.490669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.524 [2024-07-15 11:56:01.490679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.524 [2024-07-15 11:56:01.490688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.524 [2024-07-15 11:56:01.490707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.524 qpair failed and we were unable to recover it. 00:29:33.524 [2024-07-15 11:56:01.500586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.524 [2024-07-15 11:56:01.500665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.500683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.500693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.500702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.500719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.510665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.510749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.510768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.510778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.510787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.510805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 
00:29:33.525 [2024-07-15 11:56:01.520712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.520796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.520816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.520825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.520845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.520864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.530628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.530739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.530759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.530770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.530779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.530796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.540697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.540872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.540891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.540900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.540910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.540928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 
00:29:33.525 [2024-07-15 11:56:01.550721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.550803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.550822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.550836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.550846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.550864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.560780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.560867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.560884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.560894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.560902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.560919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.570783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.570868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.570886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.570896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.570905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.570926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 
00:29:33.525 [2024-07-15 11:56:01.580792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.580878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.580897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.580907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.580916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.580933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.590854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.590931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.590949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.590958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.590967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.590983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 00:29:33.525 [2024-07-15 11:56:01.600869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.525 [2024-07-15 11:56:01.601030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.525 [2024-07-15 11:56:01.601049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.525 [2024-07-15 11:56:01.601059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.525 [2024-07-15 11:56:01.601068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:33.525 [2024-07-15 11:56:01.601086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:33.525 qpair failed and we were unable to recover it. 
00:29:33.525 [2024-07-15 11:56:01.610901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.525 [2024-07-15 11:56:01.610986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.525 [2024-07-15 11:56:01.611002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.525 [2024-07-15 11:56:01.611012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.525 [2024-07-15 11:56:01.611021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.525 [2024-07-15 11:56:01.611038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.525 qpair failed and we were unable to recover it.
00:29:33.525 [2024-07-15 11:56:01.620918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.525 [2024-07-15 11:56:01.621001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.525 [2024-07-15 11:56:01.621023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.525 [2024-07-15 11:56:01.621033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.525 [2024-07-15 11:56:01.621042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.525 [2024-07-15 11:56:01.621059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.525 qpair failed and we were unable to recover it.
00:29:33.784 [2024-07-15 11:56:01.630952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.784 [2024-07-15 11:56:01.631062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.784 [2024-07-15 11:56:01.631079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.784 [2024-07-15 11:56:01.631089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.784 [2024-07-15 11:56:01.631098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.631115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.641051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.641134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.641151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.641161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.641169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.641187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.650999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.651101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.651119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.651128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.651137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.651154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.661040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.661163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.661181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.661191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.661200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.661221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.671158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.671238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.671256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.671265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.671274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.671291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.681094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.681175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.681195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.681205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.681215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.681233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.691092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.691224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.691242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.691252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.691261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.691278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.701153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.701234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.701254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.701263] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.701272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.701289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.711119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.711197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.711218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.711228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.711237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.711253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.721215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.721323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.721342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.721352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.721361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.721379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.731239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.731323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.731341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.731351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.731360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.731376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.741241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.741324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.741342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.741351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.741360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.741377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.751300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.751381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.751399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.751409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.751420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.785 [2024-07-15 11:56:01.751437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.785 qpair failed and we were unable to recover it.
00:29:33.785 [2024-07-15 11:56:01.761321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.785 [2024-07-15 11:56:01.761412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.785 [2024-07-15 11:56:01.761430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.785 [2024-07-15 11:56:01.761440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.785 [2024-07-15 11:56:01.761448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.761466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.771374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.771456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.771474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.771484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.771492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.771509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.781376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.781460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.781478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.781487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.781496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.781513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.791341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.791423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.791441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.791451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.791460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.791476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.801468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.801560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.801578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.801588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.801596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.801614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.811499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.811580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.811598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.811608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.811616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.811633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.821492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.821578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.821595] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.821605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.821614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.821631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.831532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.831612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.831630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.831639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.831648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.831665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.841547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.841637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.841655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.841665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.841676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.841693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.851599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.851679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.851697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.851707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.851716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.851733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.861604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.861687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.861705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.861715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.861724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.861741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.871549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.871629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.871646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.871656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.871665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.871682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:33.786 [2024-07-15 11:56:01.881665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:33.786 [2024-07-15 11:56:01.881741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:33.786 [2024-07-15 11:56:01.881759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:33.786 [2024-07-15 11:56:01.881769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:33.786 [2024-07-15 11:56:01.881779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:33.786 [2024-07-15 11:56:01.881796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:33.786 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.891702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.891785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.891803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.891813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.891822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.891845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.901716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.901802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.901820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.901830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.901843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.901861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.911738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.911836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.911853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.911863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.911872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.911889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.921779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.921862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.921880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.921890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.921899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.921915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.931810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.931898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.931916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.931926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.931938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.931956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.941829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.941922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.941940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.941950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.941958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.941975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.951897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.952103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.952122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.952132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.952141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.952160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.961819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.961904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.961923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.961932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.961940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.961957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.971856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.971942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.971959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.971969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.971978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.971994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.981942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.982030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.982047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.982057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.982066] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.982083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:01.991963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:01.992041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:01.992059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:01.992068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:01.992077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:01.992094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:02.002013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:02.002113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.046 [2024-07-15 11:56:02.002131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.046 [2024-07-15 11:56:02.002140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.046 [2024-07-15 11:56:02.002149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.046 [2024-07-15 11:56:02.002167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.046 qpair failed and we were unable to recover it.
00:29:34.046 [2024-07-15 11:56:02.012039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.046 [2024-07-15 11:56:02.012119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.012137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.012147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.012156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.012173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.022060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.022143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.022160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.022173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.022181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.022198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.032126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.032207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.032224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.032234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.032242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.032259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.042108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.042190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.042208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.042217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.042226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.042243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.052159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.052330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.052349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.052358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.052367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.052385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.062123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.062203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.062221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.062230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.062239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.062256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.072211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.072290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.072308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.072318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.072327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.072343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.082200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.082282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.082300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.082309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.082318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.082334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.092263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.092343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.092361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.092370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.092379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.092395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.102276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.102355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.102373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.102383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.102392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.102409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.112312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.112390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.112409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.112421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.112430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.112446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.122338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.122419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.122437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.122446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.122455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.122472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.047 [2024-07-15 11:56:02.132304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.047 [2024-07-15 11:56:02.132388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.047 [2024-07-15 11:56:02.132406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.047 [2024-07-15 11:56:02.132415] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.047 [2024-07-15 11:56:02.132424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.047 [2024-07-15 11:56:02.132441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.047 qpair failed and we were unable to recover it.
00:29:34.048 [2024-07-15 11:56:02.142389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.048 [2024-07-15 11:56:02.142469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.048 [2024-07-15 11:56:02.142487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.048 [2024-07-15 11:56:02.142497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.048 [2024-07-15 11:56:02.142506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.048 [2024-07-15 11:56:02.142523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.048 qpair failed and we were unable to recover it.
00:29:34.307 [2024-07-15 11:56:02.152345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.307 [2024-07-15 11:56:02.152424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.307 [2024-07-15 11:56:02.152442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.307 [2024-07-15 11:56:02.152452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.307 [2024-07-15 11:56:02.152460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.307 [2024-07-15 11:56:02.152478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.307 qpair failed and we were unable to recover it.
00:29:34.307 [2024-07-15 11:56:02.162472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.307 [2024-07-15 11:56:02.162578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.307 [2024-07-15 11:56:02.162599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.307 [2024-07-15 11:56:02.162609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.307 [2024-07-15 11:56:02.162618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.307 [2024-07-15 11:56:02.162635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.307 qpair failed and we were unable to recover it.
00:29:34.307 [2024-07-15 11:56:02.172474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:34.307 [2024-07-15 11:56:02.172556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:34.307 [2024-07-15 11:56:02.172574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:34.307 [2024-07-15 11:56:02.172584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:34.307 [2024-07-15 11:56:02.172592] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:34.307 [2024-07-15 11:56:02.172609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:34.307 qpair failed and we were unable to recover it.
00:29:34.307 [2024-07-15 11:56:02.182429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.182525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.182544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.182554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.182563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.182581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.192488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.192581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.192598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.192608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.192616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.192633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.202569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.202679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.202698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.202711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.202720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.202739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 
00:29:34.307 [2024-07-15 11:56:02.212626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.212709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.212728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.212738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.212746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.212764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.222597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.222682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.222700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.222710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.222719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.222736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.232610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.232688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.232706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.232716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.232725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.232742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 
00:29:34.307 [2024-07-15 11:56:02.242703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.242781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.242799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.242808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.242817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.242838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.252700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.307 [2024-07-15 11:56:02.252781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.307 [2024-07-15 11:56:02.252799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.307 [2024-07-15 11:56:02.252809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.307 [2024-07-15 11:56:02.252818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.307 [2024-07-15 11:56:02.252841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.307 qpair failed and we were unable to recover it. 00:29:34.307 [2024-07-15 11:56:02.262733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.262846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.262866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.262875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.262885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.262902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 
00:29:34.308 [2024-07-15 11:56:02.272722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.272801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.272819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.272829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.272842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.272859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.282810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.282942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.282961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.282971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.282980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.282997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.292831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.292925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.292942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.292955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.292963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.292981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 
00:29:34.308 [2024-07-15 11:56:02.302744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.302826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.302851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.302861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.302870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.302888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.312764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.312846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.312864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.312874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.312883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.312900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.322890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.322969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.322987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.322998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.323006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.323023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 
00:29:34.308 [2024-07-15 11:56:02.332901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.333026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.333045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.333055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.333064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.333082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.342915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.342992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.343010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.343020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.343028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.343045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.352937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.353045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.353063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.353074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.353083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.353101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 
00:29:34.308 [2024-07-15 11:56:02.362996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.363077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.363095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.363105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.363114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.363130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.373000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.373089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.373107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.373117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.373125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.373142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.383042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.383126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.383149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.383159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.383168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.383185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 
00:29:34.308 [2024-07-15 11:56:02.393143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.393227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.393244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.393254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.393262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.393279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.308 [2024-07-15 11:56:02.403111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.308 [2024-07-15 11:56:02.403228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.308 [2024-07-15 11:56:02.403249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.308 [2024-07-15 11:56:02.403258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.308 [2024-07-15 11:56:02.403267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.308 [2024-07-15 11:56:02.403285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.308 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.413103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.413182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.413200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.413210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.413219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.413236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 
00:29:34.566 [2024-07-15 11:56:02.423143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.423305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.423324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.423334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.423343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.423364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.433175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.433254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.433273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.433282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.433291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.433308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.443195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.443280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.443299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.443309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.443318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.443335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 
00:29:34.566 [2024-07-15 11:56:02.453251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.453330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.453348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.453358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.453366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.453383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.463252] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.463330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.463348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.463358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.463366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.463383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.473286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.473369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.473391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.473402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.473411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.473428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 
00:29:34.566 [2024-07-15 11:56:02.483333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.483413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.483431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.483440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.483449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.483466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.493331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.493414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.493431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.493441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.493450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.493468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 00:29:34.566 [2024-07-15 11:56:02.503353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.566 [2024-07-15 11:56:02.503453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.566 [2024-07-15 11:56:02.503471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.566 [2024-07-15 11:56:02.503481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.566 [2024-07-15 11:56:02.503490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.566 [2024-07-15 11:56:02.503508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.566 qpair failed and we were unable to recover it. 
00:29:34.566 [2024-07-15 11:56:02.513406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.513529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.513548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.513558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.513568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.513588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.523379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.523477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.523495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.523504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.523513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.523530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.533481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.533564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.533582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.533592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.533601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.533619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 
00:29:34.567 [2024-07-15 11:56:02.543468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.543575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.543593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.543603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.543612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.543629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.553503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.553582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.553600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.553609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.553618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.553636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.563540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.563616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.563637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.563647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.563656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.563673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 
00:29:34.567 [2024-07-15 11:56:02.573516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.573599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.573617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.573627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.573636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.573652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.583656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.583738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.583758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.583768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.583777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.583795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.593680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.593761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.593780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.593789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.593798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.593815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 
00:29:34.567 [2024-07-15 11:56:02.603679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.603758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.603778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.603788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.603797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.603818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.613636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.613718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.613736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.613746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.613755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.613772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.623698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.623781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.623800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.623810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.623819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.623840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 
00:29:34.567 [2024-07-15 11:56:02.633754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.633841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.633859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.633869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.633877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.633894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.643774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.643856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.643875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.643884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.643893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.643909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 00:29:34.567 [2024-07-15 11:56:02.653826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.653913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.653934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.567 [2024-07-15 11:56:02.653944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.567 [2024-07-15 11:56:02.653952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.567 [2024-07-15 11:56:02.653969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.567 qpair failed and we were unable to recover it. 
00:29:34.567 [2024-07-15 11:56:02.663857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.567 [2024-07-15 11:56:02.663938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.567 [2024-07-15 11:56:02.663956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.568 [2024-07-15 11:56:02.663965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.568 [2024-07-15 11:56:02.663974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.568 [2024-07-15 11:56:02.663990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.568 qpair failed and we were unable to recover it. 00:29:34.826 [2024-07-15 11:56:02.673850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.826 [2024-07-15 11:56:02.673944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.826 [2024-07-15 11:56:02.673961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.826 [2024-07-15 11:56:02.673971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.826 [2024-07-15 11:56:02.673979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.826 [2024-07-15 11:56:02.673996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.826 qpair failed and we were unable to recover it. 00:29:34.826 [2024-07-15 11:56:02.683890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.826 [2024-07-15 11:56:02.683974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.826 [2024-07-15 11:56:02.683994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.826 [2024-07-15 11:56:02.684004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.826 [2024-07-15 11:56:02.684013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.826 [2024-07-15 11:56:02.684030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.826 qpair failed and we were unable to recover it. 
00:29:34.826 [2024-07-15 11:56:02.693980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.826 [2024-07-15 11:56:02.694066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.694084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.694093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.694105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.694122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.703890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.703999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.704018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.704029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.704038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.704056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.714002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.714107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.714125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.714135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.714144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.714162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 
00:29:34.827 [2024-07-15 11:56:02.724003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.724084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.724103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.724112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.724121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.724139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.734109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.734220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.734239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.734250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.734258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.734276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.744111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.744220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.744238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.744248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.744257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.744273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 
00:29:34.827 [2024-07-15 11:56:02.754135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.754245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.754262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.754272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.754281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.754298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.764148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.764230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.764250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.764260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.764269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.764287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.774121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.774201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.774219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.774229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.774237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.774254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 
00:29:34.827 [2024-07-15 11:56:02.784196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.784280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.827 [2024-07-15 11:56:02.784297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.827 [2024-07-15 11:56:02.784307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.827 [2024-07-15 11:56:02.784319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.827 [2024-07-15 11:56:02.784336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.827 qpair failed and we were unable to recover it. 00:29:34.827 [2024-07-15 11:56:02.794268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.827 [2024-07-15 11:56:02.794379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.794397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.794407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.794416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.794434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.804261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.804342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.804361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.804370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.804379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.804396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 
00:29:34.828 [2024-07-15 11:56:02.814283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.814366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.814383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.814393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.814402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.814419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.824303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.824384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.824401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.824411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.824420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.824437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.834318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.834404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.834422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.834431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.834440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.834457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 
00:29:34.828 [2024-07-15 11:56:02.844383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.844491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.844509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.844519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.844528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.844545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.854452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.854556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.854573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.854583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.854591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.854609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.864466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.864549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.864567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.864576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.864585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.864602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 
00:29:34.828 [2024-07-15 11:56:02.874459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.874553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.874572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.874581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.874593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.874609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.884497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.884580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.828 [2024-07-15 11:56:02.884598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.828 [2024-07-15 11:56:02.884607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.828 [2024-07-15 11:56:02.884616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.828 [2024-07-15 11:56:02.884633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.828 qpair failed and we were unable to recover it. 00:29:34.828 [2024-07-15 11:56:02.894533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.828 [2024-07-15 11:56:02.894615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.829 [2024-07-15 11:56:02.894634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.829 [2024-07-15 11:56:02.894643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.829 [2024-07-15 11:56:02.894652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.829 [2024-07-15 11:56:02.894669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.829 qpair failed and we were unable to recover it. 
00:29:34.829 [2024-07-15 11:56:02.904517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.829 [2024-07-15 11:56:02.904619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.829 [2024-07-15 11:56:02.904637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.829 [2024-07-15 11:56:02.904647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.829 [2024-07-15 11:56:02.904656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.829 [2024-07-15 11:56:02.904673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.829 qpair failed and we were unable to recover it. 00:29:34.829 [2024-07-15 11:56:02.914562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.829 [2024-07-15 11:56:02.914643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.829 [2024-07-15 11:56:02.914661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.829 [2024-07-15 11:56:02.914671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.829 [2024-07-15 11:56:02.914679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.829 [2024-07-15 11:56:02.914697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.829 qpair failed and we were unable to recover it. 00:29:34.829 [2024-07-15 11:56:02.924582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.829 [2024-07-15 11:56:02.924666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.829 [2024-07-15 11:56:02.924684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.829 [2024-07-15 11:56:02.924694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.829 [2024-07-15 11:56:02.924703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:34.829 [2024-07-15 11:56:02.924720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:34.829 qpair failed and we were unable to recover it. 
00:29:35.088 [2024-07-15 11:56:02.934673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.088 [2024-07-15 11:56:02.934788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.088 [2024-07-15 11:56:02.934805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.088 [2024-07-15 11:56:02.934815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.088 [2024-07-15 11:56:02.934824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.088 [2024-07-15 11:56:02.934846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.088 qpair failed and we were unable to recover it. 00:29:35.088 [2024-07-15 11:56:02.944659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.088 [2024-07-15 11:56:02.944768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.088 [2024-07-15 11:56:02.944786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.088 [2024-07-15 11:56:02.944796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.088 [2024-07-15 11:56:02.944806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.088 [2024-07-15 11:56:02.944823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.088 qpair failed and we were unable to recover it. 00:29:35.088 [2024-07-15 11:56:02.954685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.088 [2024-07-15 11:56:02.954769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.088 [2024-07-15 11:56:02.954786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.088 [2024-07-15 11:56:02.954796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.088 [2024-07-15 11:56:02.954805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.088 [2024-07-15 11:56:02.954822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.088 qpair failed and we were unable to recover it. 
00:29:35.088 [2024-07-15 11:56:02.964747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.088 [2024-07-15 11:56:02.964850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.088 [2024-07-15 11:56:02.964868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.088 [2024-07-15 11:56:02.964880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.088 [2024-07-15 11:56:02.964889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.088 [2024-07-15 11:56:02.964906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.088 qpair failed and we were unable to recover it. 00:29:35.088 [2024-07-15 11:56:02.974736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.088 [2024-07-15 11:56:02.974818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:02.974841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:02.974852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:02.974861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.089 [2024-07-15 11:56:02.974878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:02.984795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:02.984905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:02.984924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:02.984933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:02.984942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.089 [2024-07-15 11:56:02.984959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.089 qpair failed and we were unable to recover it. 
00:29:35.089 [2024-07-15 11:56:02.994794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:02.994879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:02.994897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:02.994907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:02.994916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.089 [2024-07-15 11:56:02.994933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.004846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.004959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.004978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.004988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.004997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.089 [2024-07-15 11:56:03.005014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.014874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.014970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.014996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.015009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.015020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.015045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 
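Note that starting with the last block above, the failing connection is tqpair=0x7f1284000b90 on qpair id 1 rather than 0x91d210 on qpair id 3: the test has moved on to a freshly allocated qpair, and the same CONNECT rejection repeats there. The trailing "qpair failed and we were unable to recover it." line is printed once polling gives up on a qpair. A minimal host-side sketch of that detect-and-recover pattern, using public SPDK calls but with qpair setup and the retry policy stubbed out as assumptions:

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Sketch of the poll/recover loop exercised by this test. Assumes the
     * qpair was allocated elsewhere with spdk_nvme_ctrlr_alloc_io_qpair();
     * the three-attempt retry limit is an arbitrary choice. */
    static void poll_and_recover(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions of 0 means "process everything available". */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc < 0) {
            /* -6 (-ENXIO) is the "CQ transport error" in the log: the TCP
             * connection underneath the qpair is gone. */
            fprintf(stderr, "qpair poll failed: %d\n", rc);

            for (int i = 0; i < 3; i++) {
                if (spdk_nvme_ctrlr_reconnect_io_qpair(qpair) == 0) {
                    return;        /* connection re-established */
                }
            }
            /* Mirrors "qpair failed and we were unable to recover it." */
            spdk_nvme_ctrlr_free_io_qpair(qpair);
        }
    }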
00:29:35.089 [2024-07-15 11:56:03.024888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.024976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.024995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.025005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.025014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.025033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.034899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.034984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.035002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.035012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.035021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.035040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.044934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.045014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.045032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.045042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.045051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.045070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 
00:29:35.089 [2024-07-15 11:56:03.055008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.055090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.055108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.055120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.055129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.055148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.065001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.065085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.065103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.065113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.065121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.065141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.075028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.075111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.075129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.075139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.075148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.075167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 
00:29:35.089 [2024-07-15 11:56:03.085053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.085138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.085157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.085167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.085175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.085194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.095089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.095171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.095188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.095198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.095207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.095224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.105079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.105160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.105178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.105188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.105196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.105214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 
00:29:35.089 [2024-07-15 11:56:03.115139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.115229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.115248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.115257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.089 [2024-07-15 11:56:03.115266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.089 [2024-07-15 11:56:03.115284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.089 qpair failed and we were unable to recover it. 00:29:35.089 [2024-07-15 11:56:03.125161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.089 [2024-07-15 11:56:03.125242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.089 [2024-07-15 11:56:03.125260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.089 [2024-07-15 11:56:03.125270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.125279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.125297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 00:29:35.090 [2024-07-15 11:56:03.135230] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.135336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.135354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.135364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.135373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.135392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 
00:29:35.090 [2024-07-15 11:56:03.145265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.145374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.145396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.145406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.145416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.145435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 00:29:35.090 [2024-07-15 11:56:03.155246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.155326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.155344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.155354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.155362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.155381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 00:29:35.090 [2024-07-15 11:56:03.165274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.165354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.165372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.165382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.165391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.165408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 
00:29:35.090 [2024-07-15 11:56:03.175332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.175444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.175463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.175474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.175483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.175502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 00:29:35.090 [2024-07-15 11:56:03.185335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.090 [2024-07-15 11:56:03.185420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.090 [2024-07-15 11:56:03.185438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.090 [2024-07-15 11:56:03.185447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.090 [2024-07-15 11:56:03.185456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.090 [2024-07-15 11:56:03.185477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.090 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.195364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.195445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.195462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.195472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.195481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.195499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 
00:29:35.349 [2024-07-15 11:56:03.205396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.205526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.205543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.205553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.205562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.205580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.215416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.215500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.215517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.215527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.215535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.215554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.225442] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.225520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.225538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.225547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.225556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.225574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 
00:29:35.349 [2024-07-15 11:56:03.235499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.235613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.235633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.235643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.235652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.235670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.245542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.245659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.245677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.245686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.245695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.245714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.255512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.255597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.255614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.255624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.255633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.255651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 
00:29:35.349 [2024-07-15 11:56:03.265556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.265639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.265656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.265666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.265675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.265693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.275586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.275669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.275686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.275696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.275705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.275726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.285592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.285675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.285693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.285703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.285711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.285730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 
00:29:35.349 [2024-07-15 11:56:03.295667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.295752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.295769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.295779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.295788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.295806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.305688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.305773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.305790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.305800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.305809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.305827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 00:29:35.349 [2024-07-15 11:56:03.315702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.349 [2024-07-15 11:56:03.315784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.349 [2024-07-15 11:56:03.315801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.349 [2024-07-15 11:56:03.315811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.349 [2024-07-15 11:56:03.315819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90 00:29:35.349 [2024-07-15 11:56:03.315841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.349 qpair failed and we were unable to recover it. 
00:29:35.349 [2024-07-15 11:56:03.325731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.349 [2024-07-15 11:56:03.325815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.349 [2024-07-15 11:56:03.325836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.349 [2024-07-15 11:56:03.325846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.349 [2024-07-15 11:56:03.325855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.349 [2024-07-15 11:56:03.325873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.349 qpair failed and we were unable to recover it.
00:29:35.349 [2024-07-15 11:56:03.335788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.349 [2024-07-15 11:56:03.335873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.349 [2024-07-15 11:56:03.335891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.349 [2024-07-15 11:56:03.335900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.349 [2024-07-15 11:56:03.335909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.349 [2024-07-15 11:56:03.335926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.349 qpair failed and we were unable to recover it.
00:29:35.349 [2024-07-15 11:56:03.345807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.345890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.345907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.345917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.345926] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.345943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.355823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.355908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.355925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.355935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.355944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.355962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.365849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.365930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.365948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.365957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.365969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.365987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.375884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.376010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.376027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.376037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.376046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.376064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.385885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.385971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.385990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.386000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.386009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.386027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.395936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.396017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.396035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.396044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.396053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.396071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.405988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.406098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.406115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.406125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.406134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:35.350 [2024-07-15 11:56:03.406153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.416034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.416152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.416181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.416196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.416208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.350 [2024-07-15 11:56:03.416234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.426038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.426156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.426175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.426184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.426193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.350 [2024-07-15 11:56:03.426211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.435987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.436071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.436089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.436099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.436108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.350 [2024-07-15 11:56:03.436125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.350 [2024-07-15 11:56:03.446074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.350 [2024-07-15 11:56:03.446159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.350 [2024-07-15 11:56:03.446177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.350 [2024-07-15 11:56:03.446187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.350 [2024-07-15 11:56:03.446195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.350 [2024-07-15 11:56:03.446213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.350 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.456115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.456202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.456220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.456235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.456244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.456261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.466131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.466216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.466234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.466244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.466253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.466270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.476148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.476231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.476249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.476259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.476268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.476285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.486186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.486272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.486291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.486300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.486309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.486327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.496217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.496298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.496316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.496326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.496335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.496351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.506258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.506340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.506359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.506368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.506377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.609 [2024-07-15 11:56:03.506394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.609 qpair failed and we were unable to recover it.
00:29:35.609 [2024-07-15 11:56:03.516284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.609 [2024-07-15 11:56:03.516366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.609 [2024-07-15 11:56:03.516384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.609 [2024-07-15 11:56:03.516394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.609 [2024-07-15 11:56:03.516402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.516419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.526287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.526368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.526386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.526396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.526405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.526422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.536310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.536394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.536411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.536421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.536430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.536446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.546306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.546388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.546406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.546419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.546428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.546445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.556372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.556455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.556472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.556482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.556490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.556507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.566423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.566595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.566613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.566623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.566631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.566648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.576398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.576479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.576497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.576507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.576516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.576533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.586419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.586499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.586517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.586527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.586536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.586553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.596518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.596601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.596620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.596630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.596638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.596656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.606499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.606574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.606593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.606604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.606612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.606630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.616537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.616619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.616638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.616648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.616657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.616675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.626556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.626638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.626656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.626666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.626675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.626692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.636583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.636659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.636677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.636690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.636699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.636716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.646621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.646697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.646716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.646726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.646735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.646752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.656642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.610 [2024-07-15 11:56:03.656720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.610 [2024-07-15 11:56:03.656738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.610 [2024-07-15 11:56:03.656748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.610 [2024-07-15 11:56:03.656756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.610 [2024-07-15 11:56:03.656773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.610 qpair failed and we were unable to recover it.
00:29:35.610 [2024-07-15 11:56:03.666711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.611 [2024-07-15 11:56:03.666794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.611 [2024-07-15 11:56:03.666812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.611 [2024-07-15 11:56:03.666822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.611 [2024-07-15 11:56:03.666830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.611 [2024-07-15 11:56:03.666852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.611 qpair failed and we were unable to recover it.
00:29:35.611 [2024-07-15 11:56:03.676692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.611 [2024-07-15 11:56:03.676783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.611 [2024-07-15 11:56:03.676800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.611 [2024-07-15 11:56:03.676810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.611 [2024-07-15 11:56:03.676819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.611 [2024-07-15 11:56:03.676846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.611 qpair failed and we were unable to recover it.
00:29:35.611 [2024-07-15 11:56:03.686722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.611 [2024-07-15 11:56:03.686805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.611 [2024-07-15 11:56:03.686823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.611 [2024-07-15 11:56:03.686840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.611 [2024-07-15 11:56:03.686849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.611 [2024-07-15 11:56:03.686867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.611 qpair failed and we were unable to recover it.
00:29:35.611 [2024-07-15 11:56:03.696698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.611 [2024-07-15 11:56:03.696807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.611 [2024-07-15 11:56:03.696824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.611 [2024-07-15 11:56:03.696838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.611 [2024-07-15 11:56:03.696847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.611 [2024-07-15 11:56:03.696864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.611 qpair failed and we were unable to recover it.
00:29:35.611 [2024-07-15 11:56:03.706714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.611 [2024-07-15 11:56:03.706799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.611 [2024-07-15 11:56:03.706817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.611 [2024-07-15 11:56:03.706827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.611 [2024-07-15 11:56:03.706839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.611 [2024-07-15 11:56:03.706857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.611 qpair failed and we were unable to recover it.
00:29:35.870 [2024-07-15 11:56:03.716800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.870 [2024-07-15 11:56:03.716907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.870 [2024-07-15 11:56:03.716925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.870 [2024-07-15 11:56:03.716934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.870 [2024-07-15 11:56:03.716943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.870 [2024-07-15 11:56:03.716961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.870 qpair failed and we were unable to recover it.
00:29:35.870 [2024-07-15 11:56:03.726839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.870 [2024-07-15 11:56:03.726951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.870 [2024-07-15 11:56:03.727020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.870 [2024-07-15 11:56:03.727030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.870 [2024-07-15 11:56:03.727039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.870 [2024-07-15 11:56:03.727057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.870 qpair failed and we were unable to recover it.
00:29:35.870 [2024-07-15 11:56:03.736863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.870 [2024-07-15 11:56:03.736949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.870 [2024-07-15 11:56:03.736967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.870 [2024-07-15 11:56:03.736977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.736986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.737003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.746828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.746958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.746976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.746986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.746995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.747012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.756912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.756992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.757010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.757020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.757028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.757045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.766957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.767043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.767061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.767071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.767080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.767097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.776962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.777042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.777061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.777071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.777080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.777097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.786981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.787065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.787083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.787095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.787104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.787122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.797041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.797126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.797145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.797154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.797164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.797181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.807031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.807116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.807136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.807146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.807155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.807173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.817080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.817249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.817269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.817279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.817288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.817305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.827039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.827119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.827137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.827147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.827156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.827172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.837157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.837233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.837252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.837262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.837271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.837288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.847182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.847266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.847285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.847294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.847303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.847320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.857142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.857226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.857244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.857254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.857263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.857282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.867202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.867300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.867318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.867328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.867337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.871 [2024-07-15 11:56:03.867353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.871 qpair failed and we were unable to recover it.
00:29:35.871 [2024-07-15 11:56:03.877254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.871 [2024-07-15 11:56:03.877331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.871 [2024-07-15 11:56:03.877349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.871 [2024-07-15 11:56:03.877359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.871 [2024-07-15 11:56:03.877368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.872 [2024-07-15 11:56:03.877386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.872 qpair failed and we were unable to recover it.
00:29:35.872 [2024-07-15 11:56:03.887254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:35.872 [2024-07-15 11:56:03.887337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:35.872 [2024-07-15 11:56:03.887355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:35.872 [2024-07-15 11:56:03.887365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:35.872 [2024-07-15 11:56:03.887374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:35.872 [2024-07-15 11:56:03.887391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:35.872 qpair failed and we were unable to recover it.
00:29:35.872 [2024-07-15 11:56:03.897248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.897342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.897359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.897369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.897377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.897394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:35.872 [2024-07-15 11:56:03.907371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.907455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.907477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.907487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.907495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.907513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:35.872 [2024-07-15 11:56:03.917376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.917453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.917472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.917482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.917491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.917508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 
00:29:35.872 [2024-07-15 11:56:03.927400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.927479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.927497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.927507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.927516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.927532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:35.872 [2024-07-15 11:56:03.937446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.937553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.937572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.937582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.937591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.937608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:35.872 [2024-07-15 11:56:03.947371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.947449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.947467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.947476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.947485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.947505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 
00:29:35.872 [2024-07-15 11:56:03.957515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.957594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.957612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.957622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.957631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.957648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:35.872 [2024-07-15 11:56:03.967507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.872 [2024-07-15 11:56:03.967588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.872 [2024-07-15 11:56:03.967606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.872 [2024-07-15 11:56:03.967616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.872 [2024-07-15 11:56:03.967625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:35.872 [2024-07-15 11:56:03.967642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:35.872 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:03.977553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:03.977675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:03.977693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:03.977703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:03.977711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:03.977728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 
00:29:36.132 [2024-07-15 11:56:03.987561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:03.987644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:03.987662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:03.987672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:03.987681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:03.987698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:03.997557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:03.997655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:03.997677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:03.997688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:03.997697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:03.997715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.007653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.007738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.007758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.007768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.007778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.007796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 
00:29:36.132 [2024-07-15 11:56:04.017606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.017775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.017794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.017805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.017813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.017836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.027590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.027676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.027694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.027704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.027713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.027730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.037706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.037791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.037809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.037819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.037828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.037854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 
00:29:36.132 [2024-07-15 11:56:04.047702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.047781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.047799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.047809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.047818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.047840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.057784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.057870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.057888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.057898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.057907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.057924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.067752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.067844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.067862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.067873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.067882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.067900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 
00:29:36.132 [2024-07-15 11:56:04.077791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.077873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.077892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.077902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.077910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.077929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.087836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.087913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.087934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.087944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.087953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.087970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.097868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.097949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.097967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.097978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.097986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.098003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 
00:29:36.132 [2024-07-15 11:56:04.107877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.132 [2024-07-15 11:56:04.107988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.132 [2024-07-15 11:56:04.108007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.132 [2024-07-15 11:56:04.108016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.132 [2024-07-15 11:56:04.108025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.132 [2024-07-15 11:56:04.108043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.132 qpair failed and we were unable to recover it. 00:29:36.132 [2024-07-15 11:56:04.117842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.117927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.117945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.117955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.117964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.117980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.127934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.128021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.128039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.128049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.128063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.128080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 
00:29:36.133 [2024-07-15 11:56:04.137952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.138037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.138056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.138065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.138074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.138091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.147959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.148062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.148080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.148090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.148099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.148116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.157952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.158031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.158049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.158059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.158067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.158084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 
00:29:36.133 [2024-07-15 11:56:04.168071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.168150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.168168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.168178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.168186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.168203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.178068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.178154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.178172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.178182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.178190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.178208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.188108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.188194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.188211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.188221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.188230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.188247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 
00:29:36.133 [2024-07-15 11:56:04.198183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.198297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.198315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.198324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.198333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.198350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.208175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.208258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.208276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.208286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.208295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.208313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.133 [2024-07-15 11:56:04.218210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.218294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.218312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.218322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.218333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.218351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 
00:29:36.133 [2024-07-15 11:56:04.228202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.133 [2024-07-15 11:56:04.228279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.133 [2024-07-15 11:56:04.228297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.133 [2024-07-15 11:56:04.228306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.133 [2024-07-15 11:56:04.228315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.133 [2024-07-15 11:56:04.228333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.133 qpair failed and we were unable to recover it. 00:29:36.392 [2024-07-15 11:56:04.238184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.392 [2024-07-15 11:56:04.238260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.392 [2024-07-15 11:56:04.238278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.392 [2024-07-15 11:56:04.238287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.392 [2024-07-15 11:56:04.238296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.392 [2024-07-15 11:56:04.238312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.392 qpair failed and we were unable to recover it. 00:29:36.392 [2024-07-15 11:56:04.248200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.392 [2024-07-15 11:56:04.248283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.392 [2024-07-15 11:56:04.248301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.392 [2024-07-15 11:56:04.248311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.392 [2024-07-15 11:56:04.248320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.392 [2024-07-15 11:56:04.248337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 
00:29:36.393 [2024-07-15 11:56:04.258254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.258343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.258360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.258370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.258379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.258396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.268315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.268414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.268432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.268442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.268450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.268467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.278401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.278514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.278532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.278541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.278550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.278567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 
00:29:36.393 [2024-07-15 11:56:04.288335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.288439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.288456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.288465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.288474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.288492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.298446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.298531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.298551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.298561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.298570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.298587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.308479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.308557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.308576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.308586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.308598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.308615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 
00:29:36.393 [2024-07-15 11:56:04.318438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.318520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.318538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.318548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.318557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.318573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.328449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.328526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.328544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.328554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.328562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.328580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.338470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.338552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.338570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.338580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.338588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.338605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 
00:29:36.393 [2024-07-15 11:56:04.348544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.348628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.348646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.348655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.393 [2024-07-15 11:56:04.348664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.393 [2024-07-15 11:56:04.348681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.393 qpair failed and we were unable to recover it. 00:29:36.393 [2024-07-15 11:56:04.358597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.393 [2024-07-15 11:56:04.358674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.393 [2024-07-15 11:56:04.358692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.393 [2024-07-15 11:56:04.358702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.358710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.358727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.368597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.368709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.368727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.368737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.368746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.368763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 
00:29:36.394 [2024-07-15 11:56:04.378573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.378651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.378669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.378679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.378688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.378705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.388689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.388801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.388818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.388828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.388842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.388860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.398630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.398716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.398734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.398747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.398755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.398772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 
00:29:36.394 [2024-07-15 11:56:04.408650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.408733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.408751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.408760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.408769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.408786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.418754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.418840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.418858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.418869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.418878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.418895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.428790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.428875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.428893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.428903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.428911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.428928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 
00:29:36.394 [2024-07-15 11:56:04.438823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.438942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.438960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.438970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.438979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.438996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.448848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.448948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.448965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.448975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.448984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.449000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 00:29:36.394 [2024-07-15 11:56:04.458972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.394 [2024-07-15 11:56:04.459064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.394 [2024-07-15 11:56:04.459082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.394 [2024-07-15 11:56:04.459091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.394 [2024-07-15 11:56:04.459100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210 00:29:36.394 [2024-07-15 11:56:04.459117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:36.394 qpair failed and we were unable to recover it. 
00:29:36.394 [2024-07-15 11:56:04.468899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.394 [2024-07-15 11:56:04.469002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.394 [2024-07-15 11:56:04.469020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.394 [2024-07-15 11:56:04.469029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.394 [2024-07-15 11:56:04.469038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.395 [2024-07-15 11:56:04.469056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.395 qpair failed and we were unable to recover it.
00:29:36.395 [2024-07-15 11:56:04.478930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.395 [2024-07-15 11:56:04.479057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.395 [2024-07-15 11:56:04.479075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.395 [2024-07-15 11:56:04.479085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.395 [2024-07-15 11:56:04.479093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.395 [2024-07-15 11:56:04.479111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.395 qpair failed and we were unable to recover it.
00:29:36.395 [2024-07-15 11:56:04.488986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.395 [2024-07-15 11:56:04.489076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.395 [2024-07-15 11:56:04.489093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.395 [2024-07-15 11:56:04.489105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.395 [2024-07-15 11:56:04.489114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.395 [2024-07-15 11:56:04.489131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.395 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.499017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.499113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.499131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.499141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.499149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.499166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.509016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.509101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.509120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.509130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.509138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.509156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.519077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.519161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.519179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.519189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.519197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.519214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.529100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.529182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.529200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.529210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.529218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.529235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.539119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.539202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.539220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.539229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.539238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.539254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.549137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.549217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.549235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.549245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.549253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.549271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.559137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.559214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.559232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.559241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.559250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.559266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.569224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.569336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.569353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.569363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.569372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.569389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.579260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.579369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.579387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.579400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.579409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.579426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.589242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.589357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.589375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.589384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.589393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.589410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.599311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.599399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.599417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.599427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.599435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.599452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.609285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.609379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.609397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.609406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.609415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.609433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.619301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.619468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.619487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.619497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.619506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.619524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.629352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.629460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.629478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.629488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.629497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.629515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.639306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.639383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.639401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.639412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.639420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.639438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.649412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.649491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.649509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.649519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.649528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.649544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.659483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.659563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.659580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.659590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.659599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.659616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.669463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.669542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.669564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.669574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.669582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.669599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.679502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.679580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.679599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.679609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.679617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.679634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.689525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.689604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.689621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.689631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.689639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.689656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.699553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.699632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.699650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.699659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.699668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.699685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.709499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.709580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.709598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.709608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.709617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.654 [2024-07-15 11:56:04.709634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.654 qpair failed and we were unable to recover it.
00:29:36.654 [2024-07-15 11:56:04.719575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.654 [2024-07-15 11:56:04.719650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.654 [2024-07-15 11:56:04.719667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.654 [2024-07-15 11:56:04.719677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.654 [2024-07-15 11:56:04.719685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.655 [2024-07-15 11:56:04.719702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.655 qpair failed and we were unable to recover it.
00:29:36.655 [2024-07-15 11:56:04.729644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.655 [2024-07-15 11:56:04.729725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.655 [2024-07-15 11:56:04.729742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.655 [2024-07-15 11:56:04.729751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.655 [2024-07-15 11:56:04.729760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.655 [2024-07-15 11:56:04.729777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.655 qpair failed and we were unable to recover it.
00:29:36.655 [2024-07-15 11:56:04.739664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.655 [2024-07-15 11:56:04.739748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.655 [2024-07-15 11:56:04.739766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.655 [2024-07-15 11:56:04.739775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.655 [2024-07-15 11:56:04.739784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.655 [2024-07-15 11:56:04.739801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.655 qpair failed and we were unable to recover it.
00:29:36.655 [2024-07-15 11:56:04.749617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.655 [2024-07-15 11:56:04.749696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.655 [2024-07-15 11:56:04.749714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.655 [2024-07-15 11:56:04.749724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.655 [2024-07-15 11:56:04.749732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.655 [2024-07-15 11:56:04.749749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.655 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.759710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.759786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.759806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.759817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.759825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.759845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.769759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.769876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.769894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.769903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.769913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.769929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.779691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.779771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.779789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.779800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.779808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.779825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.789792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.789886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.789904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.789913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.789922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.789939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.799851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.799955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.799973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.799983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.799991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.800011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.809879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.809988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.810006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.810016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.810025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.810042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.819905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.820011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.820029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.820038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.820047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.820064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.829838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.829917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.829935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.829945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.829953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.829970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.839911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.840023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.840041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.915 [2024-07-15 11:56:04.840051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.915 [2024-07-15 11:56:04.840059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.915 [2024-07-15 11:56:04.840077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.915 qpair failed and we were unable to recover it.
00:29:36.915 [2024-07-15 11:56:04.849969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.915 [2024-07-15 11:56:04.850051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.915 [2024-07-15 11:56:04.850072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.850081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.850089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x91d210
00:29:36.916 [2024-07-15 11:56:04.850106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.860109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.860215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.860243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.860258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.860271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1274000b90
00:29:36.916 [2024-07-15 11:56:04.860298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.870016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.870120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.870138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.870148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.870156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1274000b90
00:29:36.916 [2024-07-15 11:56:04.870176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.880087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.880231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.880260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.880275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.880288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:36.916 [2024-07-15 11:56:04.880316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.890117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.890240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.890258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.890268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.890277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1284000b90
00:29:36.916 [2024-07-15 11:56:04.890300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.900126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.900259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.900282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.900293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.900302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90
00:29:36.916 [2024-07-15 11:56:04.900323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.910136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.916 [2024-07-15 11:56:04.910267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.916 [2024-07-15 11:56:04.910285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.916 [2024-07-15 11:56:04.910295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.916 [2024-07-15 11:56:04.910304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f127c000b90
00:29:36.916 [2024-07-15 11:56:04.910322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:36.916 qpair failed and we were unable to recover it.
00:29:36.916 [2024-07-15 11:56:04.910391] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:36.916 A controller has encountered a failure and is being reset.
00:29:36.916 Controller properly reset.
00:29:36.916 Initializing NVMe Controllers
00:29:36.916 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:36.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:36.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:36.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:36.916 Initialization complete. Launching workers.
00:29:36.916 Starting thread on core 1
00:29:36.916 Starting thread on core 2
00:29:36.916 Starting thread on core 3
00:29:36.916 Starting thread on core 0
11:56:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:36.916
00:29:36.916 real 0m11.332s
00:29:36.916 user 0m20.616s
00:29:36.916 sys 0m4.892s
11:56:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
11:56:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:36.916 ************************************
00:29:36.916 END TEST nvmf_target_disconnect_tc2
00:29:36.916 ************************************
11:56:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
11:56:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
11:56:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
11:56:04 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
11:56:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2138057 ']'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2138057
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2138057 ']'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2138057
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2138057
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2138057'
killing process with pid 2138057
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2138057
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2138057
00:29:37.433 11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
11:56:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
11:56:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:39.340 11:56:07 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:29:39.340
00:29:39.340 real 0m20.340s
00:29:39.340 user 0m47.851s
00:29:39.340 sys 0m10.123s
00:29:39.340 11:56:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:39.340 11:56:07 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:39.340 ************************************
00:29:39.340 END TEST nvmf_target_disconnect
00:29:39.340 ************************************
00:29:39.340 11:56:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:29:39.340 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:29:39.340 11:56:07 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:39.340 11:56:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.600 11:56:07 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:29:39.600
00:29:39.600 real 22m17.136s
00:29:39.600 user 45m21.425s
00:29:39.600 sys 8m14.925s
00:29:39.600 11:56:07 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:39.600 11:56:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.600 ************************************
00:29:39.600 END TEST nvmf_tcp
00:29:39.600 ************************************
00:29:39.600 11:56:07 -- common/autotest_common.sh@1142 -- # return 0
00:29:39.600 11:56:07 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:29:39.600 11:56:07 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:29:39.600 11:56:07 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:29:39.600 11:56:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:39.600 11:56:07 -- common/autotest_common.sh@10 -- # set +x
00:29:39.600 ************************************
00:29:39.600 START TEST spdkcli_nvmf_tcp
00:29:39.600 ************************************
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:29:39.600 * Looking for test storage...
00:29:39.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:39.600 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2139780
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2139780
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2139780 ']'
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:39.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:39.601 11:56:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:39.860 [2024-07-15 11:56:07.739371] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:29:39.860 [2024-07-15 11:56:07.739423] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139780 ] 00:29:39.860 EAL: No free 2048 kB hugepages reported on node 1 00:29:39.860 [2024-07-15 11:56:07.808972] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:39.860 [2024-07-15 11:56:07.884401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:39.860 [2024-07-15 11:56:07.884404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.429 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.429 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:29:40.429 11:56:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:40.429 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:40.429 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.688 11:56:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:40.688 11:56:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:40.688 11:56:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:40.688 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:40.688 11:56:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:40.689 11:56:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:40.689 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:40.689 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:40.689 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:40.689 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:40.689 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:40.689 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:40.689 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.689 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.689 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.689 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:40.689 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:40.689 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:40.689 ' 00:29:43.249 [2024-07-15 11:56:11.162752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:44.627 [2024-07-15 11:56:12.447071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:47.160 [2024-07-15 11:56:14.842214] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:49.066 [2024-07-15 11:56:16.900528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:50.445 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:50.445 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:50.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:50.445 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.445 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:50.445 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:50.445 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:50.705 11:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:50.963 11:56:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:50.963 11:56:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:50.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:50.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:50.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:50.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:50.963 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:50.963 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:50.964 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:50.964 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:50.964 ' 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:56.230 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:56.230 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:29:56.230 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:56.230 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:56.230 11:56:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:56.230 11:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:56.230 11:56:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2139780 ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2139780' 00:29:56.230 killing process with pid 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2139780 ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2139780 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2139780 ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2139780 00:29:56.230 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2139780) - No such process 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2139780 is not found' 00:29:56.230 Process with pid 2139780 is not found 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:56.230 00:29:56.230 real 0m16.714s 00:29:56.230 user 0m35.618s 00:29:56.230 sys 0m1.076s 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:56.230 11:56:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.230 ************************************ 00:29:56.230 END TEST spdkcli_nvmf_tcp 00:29:56.230 ************************************ 00:29:56.230 11:56:24 -- common/autotest_common.sh@1142 -- # return 0 00:29:56.230 11:56:24 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:56.230 11:56:24 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:56.230 11:56:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:56.230 11:56:24 -- common/autotest_common.sh@10 -- # set +x 00:29:56.489 ************************************ 00:29:56.489 START TEST nvmf_identify_passthru 00:29:56.489 ************************************ 00:29:56.489 11:56:24 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:56.489 * Looking for test storage... 00:29:56.489 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:56.489 11:56:24 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:56.489 11:56:24 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:56.489 11:56:24 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:56.489 11:56:24 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.489 11:56:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:56.489 11:56:24 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:56.489 11:56:24 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:56.489 11:56:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:03.053 11:56:30 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:03.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:03.053 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:03.053 Found net devices under 0000:af:00.0: cvl_0_0 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:03.053 Found net devices under 0000:af:00.1: cvl_0_1 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
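The discovery pass above resolves each allow-listed PCI function to its kernel net device through sysfs. Condensed to its essentials, the lookup is:

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [[ -e ${pci_net_devs[0]} ]] || continue       # skip functions with no bound net driver
    net_devs+=("${pci_net_devs[@]##*/}")          # drop the sysfs prefix, keep interface names
done
# on this node the loop yields cvl_0_0 and cvl_0_1 for 0000:af:00.0/.1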
00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:03.053 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:03.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:03.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:30:03.054 00:30:03.054 --- 10.0.0.2 ping statistics --- 00:30:03.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.054 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:03.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:03.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:30:03.054 00:30:03.054 --- 10.0.0.1 ping statistics --- 00:30:03.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:03.054 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:03.054 11:56:30 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:d8:00.0 00:30:03.054 11:56:30 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:d8:00.0 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:d8:00.0 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:d8:00.0 ']' 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:03.054 11:56:30 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:03.054 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.329 
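get_first_nvme_bdf reduces the generated NVMe bdev config to its PCI addresses and takes the first one; the serial number is then scraped from spdk_nvme_identify output. Re-run by hand, the two steps above amount to:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
bdf=${bdfs[0]}                                    # 0000:d8:00.0 on this node
serial=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 |
         awk '/Serial Number:/ {print $3}')       # BTLN916500W71P6AGN here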
11:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLN916500W71P6AGN 00:30:08.329 11:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:d8:00.0' -i 0 00:30:08.329 11:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:08.329 11:56:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:08.329 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2147266 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:12.521 11:56:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2147266 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2147266 ']' 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.521 11:56:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:12.521 [2024-07-15 11:56:40.462492] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:30:12.521 [2024-07-15 11:56:40.462542] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.521 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.521 [2024-07-15 11:56:40.536135] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.521 [2024-07-15 11:56:40.605837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.521 [2024-07-15 11:56:40.605899] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:12.521 [2024-07-15 11:56:40.605908] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.521 [2024-07-15 11:56:40.605917] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.521 [2024-07-15 11:56:40.605924] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:12.521 [2024-07-15 11:56:40.605973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.521 [2024-07-15 11:56:40.606069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.521 [2024-07-15 11:56:40.606158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.521 [2024-07-15 11:56:40.606160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:13.459 11:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.459 INFO: Log level set to 20 00:30:13.459 INFO: Requests: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "method": "nvmf_set_config", 00:30:13.459 "id": 1, 00:30:13.459 "params": { 00:30:13.459 "admin_cmd_passthru": { 00:30:13.459 "identify_ctrlr": true 00:30:13.459 } 00:30:13.459 } 00:30:13.459 } 00:30:13.459 00:30:13.459 INFO: response: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "id": 1, 00:30:13.459 "result": true 00:30:13.459 } 00:30:13.459 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.459 11:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.459 INFO: Setting log level to 20 00:30:13.459 INFO: Setting log level to 20 00:30:13.459 INFO: Log level set to 20 00:30:13.459 INFO: Log level set to 20 00:30:13.459 INFO: Requests: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "method": "framework_start_init", 00:30:13.459 "id": 1 00:30:13.459 } 00:30:13.459 00:30:13.459 INFO: Requests: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "method": "framework_start_init", 00:30:13.459 "id": 1 00:30:13.459 } 00:30:13.459 00:30:13.459 [2024-07-15 11:56:41.354744] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:13.459 INFO: response: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "id": 1, 00:30:13.459 "result": true 00:30:13.459 } 00:30:13.459 00:30:13.459 INFO: response: 00:30:13.459 { 00:30:13.459 "jsonrpc": "2.0", 00:30:13.459 "id": 1, 00:30:13.459 "result": true 00:30:13.459 } 00:30:13.459 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.459 11:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.459 11:56:41 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:13.459 INFO: Setting log level to 40 00:30:13.459 INFO: Setting log level to 40 00:30:13.459 INFO: Setting log level to 40 00:30:13.459 [2024-07-15 11:56:41.368168] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:13.459 11:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:13.459 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:13.460 11:56:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 00:30:13.460 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:13.460 11:56:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 Nvme0n1 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 [2024-07-15 11:56:44.295137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 [ 00:30:16.748 { 00:30:16.748 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:16.748 "subtype": "Discovery", 00:30:16.748 "listen_addresses": [], 00:30:16.748 "allow_any_host": true, 00:30:16.748 "hosts": [] 00:30:16.748 }, 00:30:16.748 { 00:30:16.748 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:16.748 "subtype": "NVMe", 00:30:16.748 "listen_addresses": [ 00:30:16.748 { 00:30:16.748 "trtype": "TCP", 00:30:16.748 "adrfam": "IPv4", 00:30:16.748 "traddr": "10.0.0.2", 00:30:16.748 "trsvcid": "4420" 00:30:16.748 } 00:30:16.748 ], 00:30:16.748 "allow_any_host": true, 00:30:16.748 "hosts": [], 00:30:16.748 "serial_number": 
"SPDK00000000000001", 00:30:16.748 "model_number": "SPDK bdev Controller", 00:30:16.748 "max_namespaces": 1, 00:30:16.748 "min_cntlid": 1, 00:30:16.748 "max_cntlid": 65519, 00:30:16.748 "namespaces": [ 00:30:16.748 { 00:30:16.748 "nsid": 1, 00:30:16.748 "bdev_name": "Nvme0n1", 00:30:16.748 "name": "Nvme0n1", 00:30:16.748 "nguid": "F890D138C8B24DE6BDE7BF0A4E91C56F", 00:30:16.748 "uuid": "f890d138-c8b2-4de6-bde7-bf0a4e91c56f" 00:30:16.748 } 00:30:16.748 ] 00:30:16.748 } 00:30:16.748 ] 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:16.748 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLN916500W71P6AGN 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:16.748 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLN916500W71P6AGN '!=' BTLN916500W71P6AGN ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:16.748 11:56:44 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:16.748 rmmod nvme_tcp 00:30:16.748 rmmod nvme_fabrics 00:30:16.748 rmmod nvme_keyring 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:16.748 11:56:44 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2147266 ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2147266 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2147266 ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2147266 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2147266 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2147266' 00:30:16.748 killing process with pid 2147266 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2147266 00:30:16.748 11:56:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2147266 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:19.327 11:56:46 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:19.327 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:19.327 11:56:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.231 11:56:48 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:21.231 00:30:21.231 real 0m24.583s 00:30:21.231 user 0m33.196s 00:30:21.231 sys 0m6.205s 00:30:21.231 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:21.231 11:56:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:21.231 ************************************ 00:30:21.231 END TEST nvmf_identify_passthru 00:30:21.231 ************************************ 00:30:21.231 11:56:48 -- common/autotest_common.sh@1142 -- # return 0 00:30:21.231 11:56:48 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:21.231 11:56:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:21.231 11:56:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.231 11:56:48 -- common/autotest_common.sh@10 -- # set +x 00:30:21.231 ************************************ 00:30:21.231 START TEST nvmf_dif 00:30:21.231 ************************************ 00:30:21.231 11:56:49 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:21.231 * Looking for test storage... 
00:30:21.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.231 11:56:49 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:21.231 11:56:49 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.231 11:56:49 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.231 11:56:49 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.231 11:56:49 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.231 11:56:49 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.231 11:56:49 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.232 11:56:49 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.232 11:56:49 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:21.232 11:56:49 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:21.232 11:56:49 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:21.232 11:56:49 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:21.232 11:56:49 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:21.232 11:56:49 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:21.232 11:56:49 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.232 11:56:49 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:21.232 11:56:49 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:21.232 11:56:49 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:21.232 11:56:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:27.804 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:27.804 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:27.804 Found net devices under 0000:af:00.0: cvl_0_0 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:27.804 11:56:55 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:27.805 Found net devices under 0000:af:00.1: cvl_0_1 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:27.805 11:56:55 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:27.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:27.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:30:27.805 00:30:27.805 --- 10.0.0.2 ping statistics --- 00:30:27.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.805 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:27.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:27.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:30:27.805 00:30:27.805 --- 10.0.0.1 ping statistics --- 00:30:27.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:27.805 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:27.805 11:56:55 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:30.341 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:30.341 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:30.341 11:56:58 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:30.341 11:56:58 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2153131 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2153131 00:30:30.341 11:56:58 nvmf_dif -- 
common/autotest_common.sh@829 -- # '[' -z 2153131 ']' 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:30.341 11:56:58 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:30.341 11:56:58 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:30.341 [2024-07-15 11:56:58.256547] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:30:30.341 [2024-07-15 11:56:58.256596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:30.341 EAL: No free 2048 kB hugepages reported on node 1 00:30:30.341 [2024-07-15 11:56:58.331802] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.341 [2024-07-15 11:56:58.402692] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:30.341 [2024-07-15 11:56:58.402730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:30.341 [2024-07-15 11:56:58.402740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:30.341 [2024-07-15 11:56:58.402748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:30.341 [2024-07-15 11:56:58.402755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
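The target starting here runs inside the network namespace assembled a few seconds earlier in the trace: the target-side NIC (cvl_0_0, 10.0.0.2/24) is moved into the namespace, the initiator-side NIC (cvl_0_1, 10.0.0.1/24) stays in the root namespace, and the application itself is launched with ip netns exec. Condensed from the commands traced above (the build path is shortened here for readability):

  # Target NIC and nvmf_tgt live in cvl_0_0_ns_spdk; the initiator reaches
  # 10.0.0.2:4420 from the root namespace through cvl_0_1.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF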
00:30:30.341 [2024-07-15 11:56:58.402776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:30:31.278 11:56:59 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 11:56:59 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:31.278 11:56:59 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:31.278 11:56:59 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 [2024-07-15 11:56:59.088432] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.278 11:56:59 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 ************************************ 00:30:31.278 START TEST fio_dif_1_default 00:30:31.278 ************************************ 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 bdev_null0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:31.278 [2024-07-15 11:56:59.160743] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:31.278 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:31.279 { 00:30:31.279 "params": { 00:30:31.279 "name": "Nvme$subsystem", 00:30:31.279 "trtype": "$TEST_TRANSPORT", 00:30:31.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.279 "adrfam": "ipv4", 00:30:31.279 "trsvcid": "$NVMF_PORT", 00:30:31.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.279 "hdgst": ${hdgst:-false}, 00:30:31.279 "ddgst": ${ddgst:-false} 00:30:31.279 }, 00:30:31.279 "method": "bdev_nvme_attach_controller" 00:30:31.279 } 00:30:31.279 EOF 00:30:31.279 )") 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- 
nvmf/common.sh@554 -- # cat 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:31.279 "params": { 00:30:31.279 "name": "Nvme0", 00:30:31.279 "trtype": "tcp", 00:30:31.279 "traddr": "10.0.0.2", 00:30:31.279 "adrfam": "ipv4", 00:30:31.279 "trsvcid": "4420", 00:30:31.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.279 "hdgst": false, 00:30:31.279 "ddgst": false 00:30:31.279 }, 00:30:31.279 "method": "bdev_nvme_attach_controller" 00:30:31.279 }' 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:31.279 11:56:59 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:31.538 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:31.538 fio-3.35 00:30:31.538 Starting 1 thread 00:30:31.538 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.756 00:30:43.756 filename0: (groupid=0, jobs=1): err= 0: pid=2153657: Mon Jul 15 11:57:10 2024 00:30:43.756 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10009msec) 00:30:43.756 slat (nsec): min=3896, max=59033, avg=5867.97, stdev=1430.89 00:30:43.756 clat (usec): min=555, max=48267, avg=21187.85, stdev=20225.87 00:30:43.756 lat (usec): min=561, max=48289, avg=21193.72, stdev=20225.80 00:30:43.756 clat percentiles (usec): 00:30:43.756 | 1.00th=[ 840], 5.00th=[ 857], 10.00th=[ 865], 20.00th=[ 881], 00:30:43.756 | 30.00th=[ 889], 40.00th=[ 898], 50.00th=[41157], 60.00th=[41157], 00:30:43.756 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:30:43.756 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:30:43.756 | 99.99th=[48497] 00:30:43.756 bw ( KiB/s): min= 672, max= 768, per=99.80%, avg=753.60, stdev=28.39, samples=20 00:30:43.756 iops : min= 168, max= 192, avg=188.40, stdev= 7.10, samples=20 00:30:43.756 
lat (usec) : 750=0.64%, 1000=48.52% 00:30:43.756 lat (msec) : 2=0.64%, 50=50.21% 00:30:43.756 cpu : usr=85.50%, sys=14.24%, ctx=20, majf=0, minf=243 00:30:43.756 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:43.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.756 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.756 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:43.756 00:30:43.756 Run status group 0 (all jobs): 00:30:43.756 READ: bw=755KiB/s (773kB/s), 755KiB/s-755KiB/s (773kB/s-773kB/s), io=7552KiB (7733kB), run=10009-10009msec 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.756 00:30:43.756 real 0m11.150s 00:30:43.756 user 0m17.395s 00:30:43.756 sys 0m1.804s 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:43.756 ************************************ 00:30:43.756 END TEST fio_dif_1_default 00:30:43.756 ************************************ 00:30:43.756 11:57:10 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:43.756 11:57:10 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:43.756 11:57:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:43.756 11:57:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:43.756 11:57:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:43.756 ************************************ 00:30:43.756 START TEST fio_dif_1_multi_subsystems 00:30:43.756 ************************************ 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in 
"$@" 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.756 bdev_null0 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.756 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 [2024-07-15 11:57:10.383004] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 bdev_null1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 11:57:10 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.757 { 00:30:43.757 "params": { 00:30:43.757 "name": "Nvme$subsystem", 00:30:43.757 "trtype": "$TEST_TRANSPORT", 00:30:43.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.757 
"adrfam": "ipv4", 00:30:43.757 "trsvcid": "$NVMF_PORT", 00:30:43.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.757 "hdgst": ${hdgst:-false}, 00:30:43.757 "ddgst": ${ddgst:-false} 00:30:43.757 }, 00:30:43.757 "method": "bdev_nvme_attach_controller" 00:30:43.757 } 00:30:43.757 EOF 00:30:43.757 )") 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:43.757 { 00:30:43.757 "params": { 00:30:43.757 "name": "Nvme$subsystem", 00:30:43.757 "trtype": "$TEST_TRANSPORT", 00:30:43.757 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.757 "adrfam": "ipv4", 00:30:43.757 "trsvcid": "$NVMF_PORT", 00:30:43.757 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.757 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.757 "hdgst": ${hdgst:-false}, 00:30:43.757 "ddgst": ${ddgst:-false} 00:30:43.757 }, 00:30:43.757 "method": "bdev_nvme_attach_controller" 00:30:43.757 } 00:30:43.757 EOF 00:30:43.757 )") 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:43.757 "params": { 00:30:43.757 "name": "Nvme0", 00:30:43.757 "trtype": "tcp", 00:30:43.757 "traddr": "10.0.0.2", 00:30:43.757 "adrfam": "ipv4", 00:30:43.757 "trsvcid": "4420", 00:30:43.757 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.757 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.757 "hdgst": false, 00:30:43.757 "ddgst": false 00:30:43.757 }, 00:30:43.757 "method": "bdev_nvme_attach_controller" 00:30:43.757 },{ 00:30:43.757 "params": { 00:30:43.757 "name": "Nvme1", 00:30:43.757 "trtype": "tcp", 00:30:43.757 "traddr": "10.0.0.2", 00:30:43.757 "adrfam": "ipv4", 00:30:43.757 "trsvcid": "4420", 00:30:43.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:43.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:43.757 "hdgst": false, 00:30:43.757 "ddgst": false 00:30:43.757 }, 00:30:43.757 "method": "bdev_nvme_attach_controller" 00:30:43.757 }' 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.757 11:57:10 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.757 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:43.757 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:43.757 fio-3.35 00:30:43.757 Starting 2 threads 00:30:43.757 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.733 00:30:53.733 filename0: (groupid=0, jobs=1): err= 0: pid=2156196: Mon Jul 15 11:57:21 2024 00:30:53.733 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10038msec) 00:30:53.733 slat (nsec): min=3934, max=24271, avg=7229.00, stdev=2421.39 00:30:53.733 clat (usec): min=40860, max=44883, avg=41629.42, stdev=558.94 00:30:53.733 lat (usec): min=40866, max=44895, avg=41636.65, stdev=558.94 00:30:53.733 clat percentiles (usec): 00:30:53.733 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:30:53.733 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:30:53.733 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:53.733 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:30:53.733 | 99.99th=[44827] 
00:30:53.733 bw ( KiB/s): min= 352, max= 416, per=33.88%, avg=384.00, stdev=10.38, samples=20 00:30:53.733 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:30:53.733 lat (msec) : 50=100.00% 00:30:53.733 cpu : usr=93.77%, sys=5.97%, ctx=18, majf=0, minf=109 00:30:53.733 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.733 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.733 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.733 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.733 filename1: (groupid=0, jobs=1): err= 0: pid=2156197: Mon Jul 15 11:57:21 2024 00:30:53.733 read: IOPS=187, BW=750KiB/s (768kB/s)(7520KiB/10028msec) 00:30:53.733 slat (nsec): min=2811, max=50523, avg=6644.66, stdev=2121.06 00:30:53.733 clat (usec): min=857, max=45007, avg=21317.01, stdev=20258.42 00:30:53.733 lat (usec): min=863, max=45017, avg=21323.65, stdev=20257.79 00:30:53.733 clat percentiles (usec): 00:30:53.733 | 1.00th=[ 889], 5.00th=[ 906], 10.00th=[ 955], 20.00th=[ 971], 00:30:53.733 | 30.00th=[ 979], 40.00th=[ 1029], 50.00th=[40633], 60.00th=[41157], 00:30:53.733 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:30:53.733 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:30:53.733 | 99.99th=[44827] 00:30:53.733 bw ( KiB/s): min= 670, max= 768, per=66.18%, avg=750.30, stdev=32.22, samples=20 00:30:53.733 iops : min= 167, max= 192, avg=187.55, stdev= 8.12, samples=20 00:30:53.733 lat (usec) : 1000=34.20% 00:30:53.733 lat (msec) : 2=15.59%, 50=50.21% 00:30:53.733 cpu : usr=92.89%, sys=6.86%, ctx=11, majf=0, minf=162 00:30:53.733 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.733 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.734 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.734 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.734 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.734 00:30:53.734 Run status group 0 (all jobs): 00:30:53.734 READ: bw=1133KiB/s (1160kB/s), 384KiB/s-750KiB/s (393kB/s-768kB/s), io=11.1MiB (11.6MB), run=10028-10038msec 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 
00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.992 00:30:53.992 real 0m11.586s 00:30:53.992 user 0m27.862s 00:30:53.992 sys 0m1.706s 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:53.992 11:57:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 ************************************ 00:30:53.992 END TEST fio_dif_1_multi_subsystems 00:30:53.992 ************************************ 00:30:53.992 11:57:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:30:53.992 11:57:21 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:30:53.992 11:57:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:53.992 11:57:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.992 11:57:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.992 ************************************ 00:30:53.992 START TEST fio_dif_rand_params 00:30:53.992 ************************************ 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 0
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:53.992 bdev_null0
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:53.992 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:30:53.993 [2024-07-15 11:57:22.057035] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:30:53.993 {
00:30:53.993 "params": {
00:30:53.993 "name": "Nvme$subsystem",
00:30:53.993 "trtype": "$TEST_TRANSPORT",
00:30:53.993 "traddr": "$NVMF_FIRST_TARGET_IP",
00:30:53.993 "adrfam": "ipv4",
00:30:53.993 "trsvcid": "$NVMF_PORT",
00:30:53.993 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:30:53.993 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:30:53.993 "hdgst": ${hdgst:-false},
00:30:53.993 "ddgst": ${ddgst:-false}
00:30:53.993 },
00:30:53.993 "method": "bdev_nvme_attach_controller"
00:30:53.993 }
00:30:53.993 EOF
00:30:53.993 )")
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
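The config+=() here-doc in the trace above is the heart of gen_nvmf_target_json: it stamps out one bdev_nvme_attach_controller block per subsystem id, and the IFS=,/printf/jq steps that follow join and validate the blocks (the expanded result is printed next). A minimal self-contained sketch of that pattern, not the verbatim helper: the literal tcp/10.0.0.2/4420 values are copied from the expanded JSON below, and the [%s] wrapper is added here only so jq can validate the joined blocks on their own:

config=()
for subsystem in 0; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,                                  # "${config[*]}" now joins the elements with commas
printf '[%s]\n' "${config[*]}" | jq .  # pretty-print; jq fails if the assembled JSON is malformed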
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:30:53.993 11:57:22 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:30:53.993 "params": {
00:30:53.993 "name": "Nvme0",
00:30:53.993 "trtype": "tcp",
00:30:53.993 "traddr": "10.0.0.2",
00:30:53.993 "adrfam": "ipv4",
00:30:53.993 "trsvcid": "4420",
00:30:53.993 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:30:53.993 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:30:53.993 "hdgst": false,
00:30:53.993 "ddgst": false
00:30:53.993 },
00:30:53.993 "method": "bdev_nvme_attach_controller"
00:30:53.993 }'
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:30:54.326 11:57:22 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:30:54.585 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:30:54.585 ...
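The LD_PRELOAD line above is how the test points fio at SPDK bdevs: the spdk_bdev plugin is preloaded and the generated bdev config arrives via --spdk_json_conf, with both the config and the job file streamed over /dev/fd. An equivalent standalone launch, as a sketch with ordinary files: bdev.json stands for the JSON printed above, Nvme0n1 is assumed to be the bdev name SPDK derives for namespace 1 of the attached controller "Nvme0", and the job parameters mirror the filename0 header above.

cat > job.fio <<'EOF'
[global]
ioengine=spdk_bdev
spdk_json_conf=bdev.json
; the SPDK fio plugin requires fio's thread mode
thread=1
[filename0]
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio job.fio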
00:30:54.585 fio-3.35 00:30:54.585 Starting 3 threads 00:30:54.585 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.126 00:31:01.126 filename0: (groupid=0, jobs=1): err= 0: pid=2158189: Mon Jul 15 11:57:28 2024 00:31:01.126 read: IOPS=290, BW=36.3MiB/s (38.1MB/s)(182MiB/5004msec) 00:31:01.126 slat (nsec): min=4038, max=27875, avg=8922.69, stdev=2481.85 00:31:01.126 clat (usec): min=3958, max=55733, avg=10305.44, stdev=10922.58 00:31:01.126 lat (usec): min=3964, max=55746, avg=10314.36, stdev=10922.70 00:31:01.126 clat percentiles (usec): 00:31:01.126 | 1.00th=[ 4228], 5.00th=[ 4686], 10.00th=[ 5211], 20.00th=[ 5866], 00:31:01.126 | 30.00th=[ 6390], 40.00th=[ 6849], 50.00th=[ 7242], 60.00th=[ 7898], 00:31:01.126 | 70.00th=[ 8717], 80.00th=[ 9503], 90.00th=[10552], 95.00th=[48497], 00:31:01.126 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54789], 99.95th=[55837], 00:31:01.126 | 99.99th=[55837] 00:31:01.126 bw ( KiB/s): min=25088, max=51968, per=37.94%, avg=38172.44, stdev=9361.91, samples=9 00:31:01.126 iops : min= 196, max= 406, avg=298.22, stdev=73.14, samples=9 00:31:01.126 lat (msec) : 4=0.07%, 10=85.91%, 20=7.01%, 50=4.40%, 100=2.61% 00:31:01.126 cpu : usr=90.95%, sys=8.67%, ctx=11, majf=0, minf=49 00:31:01.126 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 issued rwts: total=1455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.126 filename0: (groupid=0, jobs=1): err= 0: pid=2158190: Mon Jul 15 11:57:28 2024 00:31:01.126 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5021msec) 00:31:01.126 slat (nsec): min=5840, max=37043, avg=9490.13, stdev=2546.26 00:31:01.126 clat (usec): min=4105, max=92233, avg=12614.53, stdev=13773.66 00:31:01.126 lat (usec): min=4112, max=92245, avg=12624.02, stdev=13773.88 00:31:01.126 clat percentiles (usec): 00:31:01.126 | 1.00th=[ 4555], 5.00th=[ 5080], 10.00th=[ 5735], 20.00th=[ 6587], 00:31:01.126 | 30.00th=[ 7046], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8717], 00:31:01.126 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[47973], 95.00th=[50070], 00:31:01.126 | 99.00th=[51643], 99.50th=[52691], 99.90th=[90702], 99.95th=[91751], 00:31:01.126 | 99.99th=[91751] 00:31:01.126 bw ( KiB/s): min=18432, max=49408, per=30.28%, avg=30470.70, stdev=8876.66, samples=10 00:31:01.126 iops : min= 144, max= 386, avg=238.00, stdev=69.33, samples=10 00:31:01.126 lat (msec) : 10=78.96%, 20=10.06%, 50=5.87%, 100=5.11% 00:31:01.126 cpu : usr=90.96%, sys=8.71%, ctx=9, majf=0, minf=92 00:31:01.126 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.126 filename0: (groupid=0, jobs=1): err= 0: pid=2158191: Mon Jul 15 11:57:28 2024 00:31:01.126 read: IOPS=259, BW=32.5MiB/s (34.0MB/s)(162MiB/5002msec) 00:31:01.126 slat (nsec): min=5904, max=33251, avg=9005.77, stdev=2568.12 00:31:01.126 clat (usec): min=3650, max=54137, avg=11537.76, stdev=12627.18 00:31:01.126 lat (usec): min=3656, max=54162, avg=11546.77, stdev=12627.53 00:31:01.126 clat percentiles (usec): 
00:31:01.126 | 1.00th=[ 4228], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 5604], 00:31:01.126 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7635], 60.00th=[ 8356], 00:31:01.126 | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[11731], 95.00th=[49546], 00:31:01.126 | 99.00th=[51643], 99.50th=[52167], 99.90th=[53216], 99.95th=[54264], 00:31:01.126 | 99.99th=[54264] 00:31:01.126 bw ( KiB/s): min=21461, max=51456, per=32.39%, avg=32592.56, stdev=10890.76, samples=9 00:31:01.126 iops : min= 167, max= 402, avg=254.56, stdev=85.17, samples=9 00:31:01.126 lat (msec) : 4=0.31%, 10=81.76%, 20=8.24%, 50=5.54%, 100=4.16% 00:31:01.126 cpu : usr=92.10%, sys=7.54%, ctx=8, majf=0, minf=117 00:31:01.126 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:01.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:01.126 issued rwts: total=1299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:01.126 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:01.126 00:31:01.126 Run status group 0 (all jobs): 00:31:01.126 READ: bw=98.3MiB/s (103MB/s), 29.7MiB/s-36.3MiB/s (31.1MB/s-38.1MB/s), io=493MiB (517MB), run=5002-5021msec 00:31:01.126 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:01.126 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:01.126 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:01.126 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
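The create_subsystems 0 1 2 loop that begins here repeats the same four RPCs for each subsystem id, this round with --dif-type 2 null bdevs. One iteration spelled out as a sketch, with scripts/rpc.py from the SPDK tree standing in for the trace's rpc_cmd wrapper and the arguments copied from the RPCs visible in the trace (shown for sub_id 1):

scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420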
00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 bdev_null0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 [2024-07-15 11:57:28.245427] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 bdev_null1 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 bdev_null2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.127 { 00:31:01.127 "params": { 00:31:01.127 "name": "Nvme$subsystem", 00:31:01.127 "trtype": "$TEST_TRANSPORT", 00:31:01.127 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.127 "adrfam": "ipv4", 00:31:01.127 "trsvcid": "$NVMF_PORT", 00:31:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.127 "hdgst": ${hdgst:-false}, 00:31:01.127 "ddgst": ${ddgst:-false} 00:31:01.127 }, 00:31:01.127 "method": "bdev_nvme_attach_controller" 00:31:01.127 } 00:31:01.127 EOF 00:31:01.127 )") 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.127 { 00:31:01.127 "params": { 00:31:01.127 "name": "Nvme$subsystem", 00:31:01.127 "trtype": "$TEST_TRANSPORT", 00:31:01.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.127 "adrfam": "ipv4", 00:31:01.127 "trsvcid": "$NVMF_PORT", 00:31:01.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.127 "hdgst": ${hdgst:-false}, 00:31:01.127 "ddgst": ${ddgst:-false} 00:31:01.127 }, 00:31:01.127 "method": "bdev_nvme_attach_controller" 00:31:01.127 } 00:31:01.127 EOF 00:31:01.127 )") 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:01.127 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:01.127 { 00:31:01.127 "params": { 00:31:01.127 "name": "Nvme$subsystem", 00:31:01.127 "trtype": "$TEST_TRANSPORT", 00:31:01.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.128 "adrfam": "ipv4", 00:31:01.128 "trsvcid": "$NVMF_PORT", 00:31:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.128 "hdgst": ${hdgst:-false}, 00:31:01.128 "ddgst": ${ddgst:-false} 00:31:01.128 }, 00:31:01.128 "method": "bdev_nvme_attach_controller" 00:31:01.128 } 00:31:01.128 EOF 00:31:01.128 )") 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:01.128 "params": { 00:31:01.128 "name": "Nvme0", 00:31:01.128 "trtype": "tcp", 00:31:01.128 "traddr": "10.0.0.2", 00:31:01.128 "adrfam": "ipv4", 00:31:01.128 "trsvcid": "4420", 00:31:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.128 "hdgst": false, 00:31:01.128 "ddgst": false 00:31:01.128 }, 00:31:01.128 "method": "bdev_nvme_attach_controller" 00:31:01.128 },{ 00:31:01.128 "params": { 00:31:01.128 "name": "Nvme1", 00:31:01.128 "trtype": "tcp", 00:31:01.128 "traddr": "10.0.0.2", 00:31:01.128 "adrfam": "ipv4", 00:31:01.128 "trsvcid": "4420", 00:31:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.128 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:01.128 "hdgst": false, 00:31:01.128 "ddgst": false 00:31:01.128 }, 00:31:01.128 "method": "bdev_nvme_attach_controller" 00:31:01.128 },{ 00:31:01.128 "params": { 00:31:01.128 "name": "Nvme2", 00:31:01.128 "trtype": "tcp", 00:31:01.128 "traddr": "10.0.0.2", 00:31:01.128 "adrfam": "ipv4", 00:31:01.128 "trsvcid": "4420", 00:31:01.128 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:01.128 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:01.128 "hdgst": false, 00:31:01.128 "ddgst": false 00:31:01.128 }, 00:31:01.128 "method": "bdev_nvme_attach_controller" 00:31:01.128 }' 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.128 11:57:28 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.128 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.128 ... 00:31:01.128 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.128 ... 00:31:01.128 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:01.128 ... 00:31:01.128 fio-3.35 00:31:01.128 Starting 24 threads 00:31:01.128 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.322 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159381: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=648, BW=2593KiB/s (2656kB/s)(25.4MiB/10010msec) 00:31:13.322 slat (nsec): min=6357, max=73427, avg=12846.40, stdev=6684.09 00:31:13.322 clat (usec): min=5312, max=47977, avg=24574.12, stdev=3984.33 00:31:13.322 lat (usec): min=5327, max=47995, avg=24586.97, stdev=3985.54 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[ 9634], 5.00th=[15270], 10.00th=[19006], 20.00th=[24249], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27395], 00:31:13.322 | 99.00th=[31589], 99.50th=[39584], 99.90th=[46924], 99.95th=[46924], 00:31:13.322 | 99.99th=[47973] 00:31:13.322 bw ( KiB/s): min= 2432, max= 3264, per=4.44%, avg=2589.60, stdev=187.83, samples=20 00:31:13.322 iops : min= 608, max= 816, avg=647.40, stdev=46.96, samples=20 00:31:13.322 lat (msec) : 10=1.05%, 20=9.89%, 50=89.06% 00:31:13.322 cpu : usr=97.37%, sys=2.24%, ctx=21, majf=0, minf=41 00:31:13.322 IO depths : 1=5.0%, 2=10.3%, 4=22.2%, 8=54.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.5%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6490,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159382: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=616, BW=2464KiB/s (2523kB/s)(24.1MiB/10015msec) 00:31:13.322 slat (nsec): min=6278, max=71104, avg=17083.49, stdev=9111.24 00:31:13.322 clat (usec): min=10621, max=61986, avg=25843.23, stdev=4281.65 00:31:13.322 lat (usec): min=10631, max=62006, avg=25860.31, stdev=4282.58 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[13042], 5.00th=[18482], 10.00th=[23200], 20.00th=[24773], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.322 | 70.00th=[26346], 80.00th=[26608], 90.00th=[28443], 95.00th=[33817], 00:31:13.322 | 99.00th=[41157], 99.50th=[46400], 99.90th=[48497], 99.95th=[61604], 00:31:13.322 | 99.99th=[62129] 00:31:13.322 bw ( KiB/s): min= 2272, max= 2656, per=4.22%, avg=2461.10, stdev=86.01, samples=20 00:31:13.322 iops : min= 568, max= 664, avg=615.20, stdev=21.51, samples=20 00:31:13.322 lat (msec) : 20=6.55%, 50=93.37%, 
100=0.08% 00:31:13.322 cpu : usr=97.07%, sys=2.49%, ctx=17, majf=0, minf=42 00:31:13.322 IO depths : 1=2.3%, 2=5.0%, 4=15.3%, 8=66.3%, 16=11.1%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=91.9%, 8=3.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159383: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10010msec) 00:31:13.322 slat (nsec): min=6245, max=81892, avg=23951.21, stdev=11836.34 00:31:13.322 clat (usec): min=9436, max=68489, avg=25622.52, stdev=2834.52 00:31:13.322 lat (usec): min=9450, max=68510, avg=25646.47, stdev=2835.17 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[15795], 5.00th=[23462], 10.00th=[24249], 20.00th=[24773], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27657], 00:31:13.322 | 99.00th=[35914], 99.50th=[40633], 99.90th=[52691], 99.95th=[52691], 00:31:13.322 | 99.99th=[68682] 00:31:13.322 bw ( KiB/s): min= 2272, max= 2704, per=4.24%, avg=2473.85, stdev=108.14, samples=20 00:31:13.322 iops : min= 568, max= 676, avg=618.40, stdev=27.04, samples=20 00:31:13.322 lat (msec) : 10=0.02%, 20=2.90%, 50=96.82%, 100=0.26% 00:31:13.322 cpu : usr=97.09%, sys=2.49%, ctx=17, majf=0, minf=38 00:31:13.322 IO depths : 1=5.4%, 2=10.7%, 4=22.4%, 8=54.3%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159384: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=621, BW=2486KiB/s (2546kB/s)(24.3MiB/10009msec) 00:31:13.322 slat (nsec): min=6305, max=75180, avg=17588.93, stdev=10252.26 00:31:13.322 clat (usec): min=11341, max=57084, avg=25616.39, stdev=4308.81 00:31:13.322 lat (usec): min=11364, max=57104, avg=25633.98, stdev=4309.56 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[14353], 5.00th=[17695], 10.00th=[20841], 20.00th=[24249], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:31:13.322 | 70.00th=[26346], 80.00th=[26608], 90.00th=[28181], 95.00th=[33162], 00:31:13.322 | 99.00th=[40109], 99.50th=[42730], 99.90th=[53216], 99.95th=[56886], 00:31:13.322 | 99.99th=[56886] 00:31:13.322 bw ( KiB/s): min= 2176, max= 2672, per=4.25%, avg=2477.89, stdev=121.26, samples=19 00:31:13.322 iops : min= 544, max= 668, avg=619.47, stdev=30.32, samples=19 00:31:13.322 lat (msec) : 20=8.41%, 50=91.34%, 100=0.26% 00:31:13.322 cpu : usr=97.26%, sys=2.33%, ctx=24, majf=0, minf=51 00:31:13.322 IO depths : 1=2.5%, 2=5.1%, 4=14.7%, 8=66.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=91.6%, 8=3.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159385: Mon Jul 15 11:57:39 
2024 00:31:13.322 read: IOPS=633, BW=2533KiB/s (2594kB/s)(24.8MiB/10007msec) 00:31:13.322 slat (nsec): min=6407, max=70779, avg=14977.09, stdev=8417.87 00:31:13.322 clat (usec): min=7506, max=46384, avg=25143.57, stdev=3006.17 00:31:13.322 lat (usec): min=7514, max=46417, avg=25158.55, stdev=3007.13 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[14615], 5.00th=[18482], 10.00th=[23462], 20.00th=[24773], 00:31:13.322 | 30.00th=[25297], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27132], 00:31:13.322 | 99.00th=[31851], 99.50th=[34341], 99.90th=[46400], 99.95th=[46400], 00:31:13.322 | 99.99th=[46400] 00:31:13.322 bw ( KiB/s): min= 2304, max= 3200, per=4.34%, avg=2533.63, stdev=195.03, samples=19 00:31:13.322 iops : min= 576, max= 800, avg=633.37, stdev=48.78, samples=19 00:31:13.322 lat (msec) : 10=0.41%, 20=6.45%, 50=93.14% 00:31:13.322 cpu : usr=96.99%, sys=2.59%, ctx=22, majf=0, minf=63 00:31:13.322 IO depths : 1=5.3%, 2=10.6%, 4=22.0%, 8=54.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159386: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=628, BW=2516KiB/s (2576kB/s)(24.6MiB/10007msec) 00:31:13.322 slat (nsec): min=6266, max=71341, avg=21454.69, stdev=11086.93 00:31:13.322 clat (usec): min=8776, max=46562, avg=25267.10, stdev=2688.35 00:31:13.322 lat (usec): min=8785, max=46582, avg=25288.55, stdev=2689.79 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[14877], 5.00th=[20055], 10.00th=[23725], 20.00th=[24773], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[26870], 95.00th=[27132], 00:31:13.322 | 99.00th=[30802], 99.50th=[33817], 99.90th=[46400], 99.95th=[46400], 00:31:13.322 | 99.99th=[46400] 00:31:13.322 bw ( KiB/s): min= 2304, max= 2864, per=4.31%, avg=2515.11, stdev=128.80, samples=19 00:31:13.322 iops : min= 576, max= 716, avg=628.74, stdev=32.23, samples=19 00:31:13.322 lat (msec) : 10=0.10%, 20=4.72%, 50=95.19% 00:31:13.322 cpu : usr=97.29%, sys=2.31%, ctx=16, majf=0, minf=46 00:31:13.322 IO depths : 1=5.2%, 2=10.4%, 4=21.6%, 8=54.9%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159387: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=609, BW=2438KiB/s (2497kB/s)(23.8MiB/10008msec) 00:31:13.322 slat (nsec): min=5034, max=76353, avg=18731.24, stdev=10528.63 00:31:13.322 clat (usec): min=8240, max=51399, avg=26121.11, stdev=4036.93 00:31:13.322 lat (usec): min=8253, max=51414, avg=26139.85, stdev=4037.00 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[15008], 5.00th=[21103], 10.00th=[23987], 20.00th=[24773], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.322 | 70.00th=[26346], 80.00th=[26870], 90.00th=[28181], 
95.00th=[33162], 00:31:13.322 | 99.00th=[42730], 99.50th=[44827], 99.90th=[51119], 99.95th=[51643], 00:31:13.322 | 99.99th=[51643] 00:31:13.322 bw ( KiB/s): min= 2180, max= 2576, per=4.15%, avg=2422.11, stdev=90.34, samples=19 00:31:13.322 iops : min= 545, max= 644, avg=605.53, stdev=22.58, samples=19 00:31:13.322 lat (msec) : 10=0.21%, 20=4.10%, 50=95.43%, 100=0.26% 00:31:13.322 cpu : usr=97.49%, sys=2.12%, ctx=14, majf=0, minf=55 00:31:13.322 IO depths : 1=2.2%, 2=4.5%, 4=13.5%, 8=67.7%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=91.8%, 8=4.4%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6100,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename0: (groupid=0, jobs=1): err= 0: pid=2159388: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=631, BW=2524KiB/s (2585kB/s)(24.7MiB/10005msec) 00:31:13.322 slat (nsec): min=6268, max=70484, avg=19453.01, stdev=10285.76 00:31:13.322 clat (usec): min=7615, max=45991, avg=25202.87, stdev=3519.12 00:31:13.322 lat (usec): min=7626, max=46012, avg=25222.33, stdev=3520.72 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[11338], 5.00th=[17695], 10.00th=[22152], 20.00th=[24773], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27132], 95.00th=[28181], 00:31:13.322 | 99.00th=[36439], 99.50th=[39584], 99.90th=[44303], 99.95th=[44303], 00:31:13.322 | 99.99th=[45876] 00:31:13.322 bw ( KiB/s): min= 2400, max= 2816, per=4.30%, avg=2508.05, stdev=96.43, samples=19 00:31:13.322 iops : min= 600, max= 704, avg=627.00, stdev=24.10, samples=19 00:31:13.322 lat (msec) : 10=0.48%, 20=7.21%, 50=92.32% 00:31:13.322 cpu : usr=96.78%, sys=2.78%, ctx=14, majf=0, minf=38 00:31:13.322 IO depths : 1=4.1%, 2=8.5%, 4=20.7%, 8=58.0%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6314,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159389: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=610, BW=2443KiB/s (2501kB/s)(23.9MiB/10008msec) 00:31:13.322 slat (nsec): min=6262, max=70101, avg=21701.93, stdev=10455.30 00:31:13.322 clat (usec): min=10108, max=56126, avg=26028.08, stdev=3232.52 00:31:13.322 lat (usec): min=10115, max=56146, avg=26049.78, stdev=3232.23 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[16057], 5.00th=[22938], 10.00th=[24249], 20.00th=[25035], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.322 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[30278], 00:31:13.322 | 99.00th=[39060], 99.50th=[41681], 99.90th=[52167], 99.95th=[55837], 00:31:13.322 | 99.99th=[56361] 00:31:13.322 bw ( KiB/s): min= 2180, max= 2560, per=4.17%, avg=2432.21, stdev=80.89, samples=19 00:31:13.322 iops : min= 545, max= 640, avg=608.05, stdev=20.22, samples=19 00:31:13.322 lat (msec) : 20=2.19%, 50=97.53%, 100=0.28% 00:31:13.322 cpu : usr=97.25%, sys=2.34%, ctx=15, majf=0, minf=31 00:31:13.322 IO depths : 1=3.1%, 2=6.6%, 4=18.3%, 8=62.3%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=92.7%, 8=1.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159390: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=608, BW=2435KiB/s (2493kB/s)(23.8MiB/10005msec) 00:31:13.322 slat (nsec): min=5367, max=75734, avg=18143.29, stdev=10532.44 00:31:13.322 clat (usec): min=7380, max=53521, avg=26184.73, stdev=4504.45 00:31:13.322 lat (usec): min=7398, max=53535, avg=26202.88, stdev=4504.29 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[11469], 5.00th=[20055], 10.00th=[23725], 20.00th=[24773], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.322 | 70.00th=[26346], 80.00th=[26870], 90.00th=[29754], 95.00th=[34341], 00:31:13.322 | 99.00th=[42730], 99.50th=[45351], 99.90th=[49021], 99.95th=[53216], 00:31:13.322 | 99.99th=[53740] 00:31:13.322 bw ( KiB/s): min= 2176, max= 2608, per=4.15%, avg=2422.74, stdev=104.28, samples=19 00:31:13.322 iops : min= 544, max= 652, avg=605.68, stdev=26.07, samples=19 00:31:13.322 lat (msec) : 10=0.56%, 20=4.50%, 50=94.86%, 100=0.08% 00:31:13.322 cpu : usr=97.03%, sys=2.57%, ctx=20, majf=0, minf=55 00:31:13.322 IO depths : 1=0.5%, 2=1.4%, 4=10.3%, 8=73.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=91.2%, 8=5.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159391: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=604, BW=2419KiB/s (2477kB/s)(23.6MiB/10003msec) 00:31:13.322 slat (nsec): min=5984, max=60816, avg=17405.96, stdev=9329.37 00:31:13.322 clat (usec): min=3370, max=47710, avg=26355.40, stdev=4358.64 00:31:13.322 lat (usec): min=3377, max=47724, avg=26372.81, stdev=4358.44 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[12518], 5.00th=[21627], 10.00th=[24249], 20.00th=[25035], 00:31:13.322 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:31:13.322 | 70.00th=[26608], 80.00th=[26870], 90.00th=[30278], 95.00th=[35390], 00:31:13.322 | 99.00th=[42730], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:31:13.322 | 99.99th=[47973] 00:31:13.322 bw ( KiB/s): min= 2096, max= 2560, per=4.12%, avg=2403.58, stdev=108.59, samples=19 00:31:13.322 iops : min= 524, max= 640, avg=600.89, stdev=27.15, samples=19 00:31:13.322 lat (msec) : 4=0.10%, 10=0.53%, 20=3.44%, 50=95.93% 00:31:13.322 cpu : usr=97.05%, sys=2.52%, ctx=20, majf=0, minf=53 00:31:13.322 IO depths : 1=0.5%, 2=1.8%, 4=10.6%, 8=72.9%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=91.4%, 8=4.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159392: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=626, BW=2507KiB/s (2567kB/s)(24.5MiB/10010msec) 00:31:13.322 slat (nsec): min=6236, max=78066, avg=20051.51, stdev=10888.30 00:31:13.322 clat (usec): min=11828, max=52949, avg=25367.92, 
stdev=3674.42 00:31:13.322 lat (usec): min=11841, max=52969, avg=25387.97, stdev=3675.42 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[14222], 5.00th=[17433], 10.00th=[23200], 20.00th=[24773], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.322 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27132], 95.00th=[28443], 00:31:13.322 | 99.00th=[37487], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:31:13.322 | 99.99th=[52691] 00:31:13.322 bw ( KiB/s): min= 2256, max= 2768, per=4.29%, avg=2502.65, stdev=112.02, samples=20 00:31:13.322 iops : min= 564, max= 692, avg=625.60, stdev=28.02, samples=20 00:31:13.322 lat (msec) : 20=8.24%, 50=91.71%, 100=0.05% 00:31:13.322 cpu : usr=97.30%, sys=2.28%, ctx=20, majf=0, minf=39 00:31:13.322 IO depths : 1=4.2%, 2=8.9%, 4=21.1%, 8=57.1%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159393: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=634, BW=2536KiB/s (2597kB/s)(24.8MiB/10008msec) 00:31:13.322 slat (nsec): min=6365, max=73779, avg=14613.31, stdev=7460.38 00:31:13.322 clat (usec): min=4510, max=50069, avg=25122.79, stdev=4056.84 00:31:13.322 lat (usec): min=4519, max=50077, avg=25137.40, stdev=4057.97 00:31:13.322 clat percentiles (usec): 00:31:13.322 | 1.00th=[10552], 5.00th=[16712], 10.00th=[22152], 20.00th=[24511], 00:31:13.322 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:31:13.322 | 70.00th=[26084], 80.00th=[26346], 90.00th=[27132], 95.00th=[27919], 00:31:13.322 | 99.00th=[38011], 99.50th=[41157], 99.90th=[45876], 99.95th=[46924], 00:31:13.322 | 99.99th=[50070] 00:31:13.322 bw ( KiB/s): min= 2400, max= 2768, per=4.35%, avg=2537.26, stdev=94.10, samples=19 00:31:13.322 iops : min= 600, max= 692, avg=634.32, stdev=23.53, samples=19 00:31:13.322 lat (msec) : 10=0.72%, 20=7.60%, 50=91.65%, 100=0.03% 00:31:13.322 cpu : usr=97.28%, sys=2.28%, ctx=19, majf=0, minf=47 00:31:13.322 IO depths : 1=3.8%, 2=8.1%, 4=21.5%, 8=57.5%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:13.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.322 issued rwts: total=6346,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.322 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.322 filename1: (groupid=0, jobs=1): err= 0: pid=2159394: Mon Jul 15 11:57:39 2024 00:31:13.322 read: IOPS=601, BW=2408KiB/s (2465kB/s)(23.5MiB/10007msec) 00:31:13.322 slat (nsec): min=6301, max=76379, avg=19653.37, stdev=10151.54 00:31:13.322 clat (usec): min=7257, max=50008, avg=26434.35, stdev=4466.02 00:31:13.322 lat (usec): min=7264, max=50015, avg=26454.01, stdev=4466.70 00:31:13.322 clat percentiles (usec): 00:31:13.323 | 1.00th=[14222], 5.00th=[20579], 10.00th=[24249], 20.00th=[25035], 00:31:13.323 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.323 | 70.00th=[26346], 80.00th=[26870], 90.00th=[30802], 95.00th=[35914], 00:31:13.323 | 99.00th=[42730], 99.50th=[44827], 99.90th=[47973], 99.95th=[47973], 00:31:13.323 | 99.99th=[50070] 00:31:13.323 bw ( KiB/s): min= 2200, max= 2640, per=4.12%, avg=2401.00, stdev=96.24, 
samples=19 00:31:13.323 iops : min= 550, max= 660, avg=600.21, stdev=24.06, samples=19 00:31:13.323 lat (msec) : 10=0.25%, 20=4.42%, 50=95.32%, 100=0.02% 00:31:13.323 cpu : usr=97.16%, sys=2.42%, ctx=14, majf=0, minf=33 00:31:13.323 IO depths : 1=3.5%, 2=7.4%, 4=21.0%, 8=58.8%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename1: (groupid=0, jobs=1): err= 0: pid=2159395: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=618, BW=2476KiB/s (2535kB/s)(24.2MiB/10010msec) 00:31:13.323 slat (nsec): min=6303, max=75024, avg=20964.94, stdev=11232.85 00:31:13.323 clat (usec): min=7633, max=61551, avg=25694.07, stdev=3527.08 00:31:13.323 lat (usec): min=7642, max=61571, avg=25715.04, stdev=3527.98 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[15008], 5.00th=[20579], 10.00th=[23725], 20.00th=[24773], 00:31:13.323 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.323 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27132], 95.00th=[30278], 00:31:13.323 | 99.00th=[41157], 99.50th=[44303], 99.90th=[46400], 99.95th=[61080], 00:31:13.323 | 99.99th=[61604] 00:31:13.323 bw ( KiB/s): min= 2176, max= 2704, per=4.24%, avg=2471.50, stdev=103.58, samples=20 00:31:13.323 iops : min= 544, max= 676, avg=617.80, stdev=25.93, samples=20 00:31:13.323 lat (msec) : 10=0.16%, 20=4.16%, 50=95.59%, 100=0.08% 00:31:13.323 cpu : usr=97.20%, sys=2.38%, ctx=15, majf=0, minf=42 00:31:13.323 IO depths : 1=3.2%, 2=6.7%, 4=17.6%, 8=62.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename1: (groupid=0, jobs=1): err= 0: pid=2159396: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10003msec) 00:31:13.323 slat (usec): min=6, max=105, avg=19.46, stdev=11.97 00:31:13.323 clat (usec): min=2930, max=49893, avg=26912.04, stdev=4575.55 00:31:13.323 lat (usec): min=2942, max=49912, avg=26931.51, stdev=4574.53 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[15008], 5.00th=[22414], 10.00th=[24249], 20.00th=[25035], 00:31:13.323 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:31:13.323 | 70.00th=[26608], 80.00th=[27395], 90.00th=[32900], 95.00th=[37487], 00:31:13.323 | 99.00th=[42730], 99.50th=[43779], 99.90th=[46924], 99.95th=[49546], 00:31:13.323 | 99.99th=[50070] 00:31:13.323 bw ( KiB/s): min= 1824, max= 2560, per=4.04%, avg=2358.53, stdev=155.11, samples=19 00:31:13.323 iops : min= 456, max= 640, avg=589.63, stdev=38.78, samples=19 00:31:13.323 lat (msec) : 4=0.07%, 10=0.27%, 20=2.89%, 50=96.78% 00:31:13.323 cpu : usr=97.19%, sys=2.39%, ctx=33, majf=0, minf=40 00:31:13.323 IO depths : 1=0.3%, 2=0.6%, 4=6.6%, 8=76.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=90.5%, 8=7.2%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=5927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159397: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=581, BW=2324KiB/s (2380kB/s)(22.7MiB/10005msec) 00:31:13.323 slat (nsec): min=5341, max=70243, avg=15895.47, stdev=8872.41 00:31:13.323 clat (usec): min=6185, max=65655, avg=27445.58, stdev=5420.24 00:31:13.323 lat (usec): min=6193, max=65669, avg=27461.47, stdev=5420.45 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[15008], 5.00th=[19530], 10.00th=[23725], 20.00th=[25035], 00:31:13.323 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:31:13.323 | 70.00th=[27132], 80.00th=[30278], 90.00th=[34866], 95.00th=[38011], 00:31:13.323 | 99.00th=[43779], 99.50th=[44827], 99.90th=[49546], 99.95th=[65799], 00:31:13.323 | 99.99th=[65799] 00:31:13.323 bw ( KiB/s): min= 1720, max= 2448, per=3.96%, avg=2308.21, stdev=162.24, samples=19 00:31:13.323 iops : min= 430, max= 612, avg=577.05, stdev=40.56, samples=19 00:31:13.323 lat (msec) : 10=0.31%, 20=5.21%, 50=94.39%, 100=0.09% 00:31:13.323 cpu : usr=96.93%, sys=2.68%, ctx=18, majf=0, minf=30 00:31:13.323 IO depths : 1=0.9%, 2=1.9%, 4=10.7%, 8=73.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=91.0%, 8=5.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=5813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159398: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=576, BW=2305KiB/s (2360kB/s)(22.5MiB/10004msec) 00:31:13.323 slat (nsec): min=6073, max=73427, avg=17907.71, stdev=10587.64 00:31:13.323 clat (usec): min=4982, max=50253, avg=27652.63, stdev=5254.94 00:31:13.323 lat (usec): min=4996, max=50270, avg=27670.54, stdev=5254.57 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[13435], 5.00th=[22152], 10.00th=[24249], 20.00th=[25035], 00:31:13.323 | 30.00th=[25560], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:31:13.323 | 70.00th=[27395], 80.00th=[31065], 90.00th=[35914], 95.00th=[38011], 00:31:13.323 | 99.00th=[44303], 99.50th=[44303], 99.90th=[46924], 99.95th=[50070], 00:31:13.323 | 99.99th=[50070] 00:31:13.323 bw ( KiB/s): min= 1808, max= 2533, per=3.91%, avg=2283.21, stdev=188.84, samples=19 00:31:13.323 iops : min= 452, max= 633, avg=570.79, stdev=47.19, samples=19 00:31:13.323 lat (msec) : 10=0.35%, 20=2.88%, 50=96.72%, 100=0.05% 00:31:13.323 cpu : usr=96.96%, sys=2.64%, ctx=21, majf=0, minf=54 00:31:13.323 IO depths : 1=1.4%, 2=2.9%, 4=13.0%, 8=70.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=91.8%, 8=4.0%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=5764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159399: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=623, BW=2496KiB/s (2556kB/s)(24.4MiB/10003msec) 00:31:13.323 slat (nsec): min=6277, max=70722, avg=18566.19, stdev=9546.61 00:31:13.323 clat (usec): min=4504, max=41139, avg=25502.16, stdev=2808.33 00:31:13.323 lat (usec): min=4518, max=41155, avg=25520.72, stdev=2809.60 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[14222], 5.00th=[21103], 10.00th=[23987], 20.00th=[24773], 
00:31:13.323 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:31:13.323 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27132], 95.00th=[28181], 00:31:13.323 | 99.00th=[34341], 99.50th=[36439], 99.90th=[41157], 99.95th=[41157], 00:31:13.323 | 99.99th=[41157] 00:31:13.323 bw ( KiB/s): min= 2432, max= 2712, per=4.27%, avg=2493.05, stdev=78.62, samples=19 00:31:13.323 iops : min= 608, max= 678, avg=623.26, stdev=19.65, samples=19 00:31:13.323 lat (msec) : 10=0.26%, 20=3.56%, 50=96.19% 00:31:13.323 cpu : usr=96.99%, sys=2.58%, ctx=23, majf=0, minf=30 00:31:13.323 IO depths : 1=4.2%, 2=9.0%, 4=22.0%, 8=56.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159400: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=599, BW=2400KiB/s (2457kB/s)(23.5MiB/10008msec) 00:31:13.323 slat (nsec): min=4938, max=69330, avg=17675.78, stdev=9553.18 00:31:13.323 clat (usec): min=6816, max=52131, avg=26547.36, stdev=4461.84 00:31:13.323 lat (usec): min=6828, max=52145, avg=26565.03, stdev=4461.92 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[13566], 5.00th=[20317], 10.00th=[23987], 20.00th=[25035], 00:31:13.323 | 30.00th=[25297], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:31:13.323 | 70.00th=[26608], 80.00th=[27395], 90.00th=[31851], 95.00th=[35390], 00:31:13.323 | 99.00th=[41157], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:31:13.323 | 99.99th=[52167] 00:31:13.323 bw ( KiB/s): min= 2176, max= 2496, per=4.08%, avg=2381.05, stdev=83.36, samples=19 00:31:13.323 iops : min= 544, max= 624, avg=595.26, stdev=20.84, samples=19 00:31:13.323 lat (msec) : 10=0.32%, 20=4.18%, 50=95.45%, 100=0.05% 00:31:13.323 cpu : usr=97.17%, sys=2.43%, ctx=16, majf=0, minf=43 00:31:13.323 IO depths : 1=2.2%, 2=4.5%, 4=15.9%, 8=66.0%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=92.5%, 8=2.9%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159401: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=588, BW=2355KiB/s (2411kB/s)(23.0MiB/10013msec) 00:31:13.323 slat (nsec): min=6296, max=70441, avg=20874.93, stdev=10882.37 00:31:13.323 clat (usec): min=7421, max=75804, avg=27022.67, stdev=5782.62 00:31:13.323 lat (usec): min=7430, max=75850, avg=27043.54, stdev=5781.96 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[13173], 5.00th=[19792], 10.00th=[24511], 20.00th=[25035], 00:31:13.323 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26084], 60.00th=[26346], 00:31:13.323 | 70.00th=[26608], 80.00th=[27657], 90.00th=[33817], 95.00th=[36963], 00:31:13.323 | 99.00th=[44827], 99.50th=[64750], 99.90th=[76022], 99.95th=[76022], 00:31:13.323 | 99.99th=[76022] 00:31:13.323 bw ( KiB/s): min= 2048, max= 2560, per=4.01%, avg=2339.95, stdev=135.01, samples=19 00:31:13.323 iops : min= 512, max= 640, avg=584.95, stdev=33.81, samples=19 00:31:13.323 lat (msec) : 10=0.10%, 20=5.02%, 50=94.23%, 100=0.64% 00:31:13.323 cpu : usr=97.59%, sys=2.02%, ctx=16, 
majf=0, minf=38 00:31:13.323 IO depths : 1=3.2%, 2=6.8%, 4=21.1%, 8=59.2%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=93.3%, 8=1.2%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=5894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159402: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=516, BW=2066KiB/s (2116kB/s)(20.2MiB/10003msec) 00:31:13.323 slat (nsec): min=6015, max=72185, avg=17709.83, stdev=10988.47 00:31:13.323 clat (usec): min=3399, max=49799, avg=30870.54, stdev=5757.93 00:31:13.323 lat (usec): min=3406, max=49819, avg=30888.25, stdev=5757.11 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[13960], 5.00th=[24511], 10.00th=[25297], 20.00th=[26084], 00:31:13.323 | 30.00th=[26608], 40.00th=[28705], 50.00th=[30802], 60.00th=[32637], 00:31:13.323 | 70.00th=[34341], 80.00th=[35914], 90.00th=[38011], 95.00th=[40633], 00:31:13.323 | 99.00th=[43779], 99.50th=[44827], 99.90th=[46924], 99.95th=[49546], 00:31:13.323 | 99.99th=[49546] 00:31:13.323 bw ( KiB/s): min= 1792, max= 2400, per=3.53%, avg=2061.21, stdev=224.42, samples=19 00:31:13.323 iops : min= 448, max= 600, avg=515.26, stdev=56.13, samples=19 00:31:13.323 lat (msec) : 4=0.08%, 10=0.62%, 20=1.57%, 50=97.74% 00:31:13.323 cpu : usr=97.10%, sys=2.49%, ctx=17, majf=0, minf=45 00:31:13.323 IO depths : 1=0.1%, 2=0.3%, 4=14.5%, 8=71.7%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=92.8%, 8=2.6%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=5167,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159403: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=640, BW=2561KiB/s (2623kB/s)(25.0MiB/10013msec) 00:31:13.323 slat (usec): min=3, max=126, avg=18.18, stdev=13.32 00:31:13.323 clat (usec): min=4810, max=48670, avg=24846.97, stdev=4410.88 00:31:13.323 lat (usec): min=4817, max=48684, avg=24865.15, stdev=4411.78 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[10421], 5.00th=[15926], 10.00th=[18744], 20.00th=[23987], 00:31:13.323 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[25822], 00:31:13.323 | 70.00th=[26084], 80.00th=[26608], 90.00th=[27132], 95.00th=[28967], 00:31:13.323 | 99.00th=[38536], 99.50th=[41157], 99.90th=[46400], 99.95th=[47449], 00:31:13.323 | 99.99th=[48497] 00:31:13.323 bw ( KiB/s): min= 2432, max= 2776, per=4.38%, avg=2558.40, stdev=97.72, samples=20 00:31:13.323 iops : min= 608, max= 694, avg=639.60, stdev=24.43, samples=20 00:31:13.323 lat (msec) : 10=0.94%, 20=10.64%, 50=88.43% 00:31:13.323 cpu : usr=96.88%, sys=2.63%, ctx=42, majf=0, minf=50 00:31:13.323 IO depths : 1=3.7%, 2=7.9%, 4=20.2%, 8=58.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6412,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 filename2: (groupid=0, jobs=1): err= 0: pid=2159404: Mon Jul 15 11:57:39 2024 00:31:13.323 read: IOPS=613, BW=2454KiB/s (2513kB/s)(24.1MiB/10051msec) 
00:31:13.323 slat (nsec): min=5633, max=96204, avg=26347.77, stdev=14452.52 00:31:13.323 clat (usec): min=7977, max=61555, avg=25836.29, stdev=4038.45 00:31:13.323 lat (usec): min=8011, max=61585, avg=25862.64, stdev=4037.56 00:31:13.323 clat percentiles (usec): 00:31:13.323 | 1.00th=[14222], 5.00th=[19792], 10.00th=[23987], 20.00th=[24773], 00:31:13.323 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:31:13.323 | 70.00th=[26346], 80.00th=[26608], 90.00th=[27395], 95.00th=[33162], 00:31:13.323 | 99.00th=[40109], 99.50th=[44303], 99.90th=[61080], 99.95th=[61604], 00:31:13.323 | 99.99th=[61604] 00:31:13.323 bw ( KiB/s): min= 2256, max= 2784, per=4.21%, avg=2459.50, stdev=135.27, samples=20 00:31:13.323 iops : min= 564, max= 696, avg=614.80, stdev=33.84, samples=20 00:31:13.323 lat (msec) : 10=0.13%, 20=4.95%, 50=94.73%, 100=0.19% 00:31:13.323 cpu : usr=97.91%, sys=1.65%, ctx=121, majf=0, minf=47 00:31:13.323 IO depths : 1=3.9%, 2=8.5%, 4=20.0%, 8=58.0%, 16=9.6%, 32=0.0%, >=64=0.0% 00:31:13.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 complete : 0=0.0%, 4=93.4%, 8=1.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:13.323 issued rwts: total=6166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:13.323 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:13.323 00:31:13.323 Run status group 0 (all jobs): 00:31:13.323 READ: bw=57.0MiB/s (59.7MB/s), 2066KiB/s-2593KiB/s (2116kB/s-2656kB/s), io=573MiB (601MB), run=10003-10051msec 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 
11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.323 bdev_null0 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.323 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 [2024-07-15 11:57:40.061430] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 bdev_null1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:13.324 { 00:31:13.324 "params": { 00:31:13.324 "name": "Nvme$subsystem", 00:31:13.324 "trtype": "$TEST_TRANSPORT", 00:31:13.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.324 "adrfam": "ipv4", 00:31:13.324 "trsvcid": "$NVMF_PORT", 00:31:13.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:13.324 "hdgst": ${hdgst:-false}, 00:31:13.324 "ddgst": ${ddgst:-false} 00:31:13.324 }, 00:31:13.324 "method": "bdev_nvme_attach_controller" 00:31:13.324 } 00:31:13.324 EOF 00:31:13.324 )") 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:13.324 { 00:31:13.324 "params": { 00:31:13.324 "name": "Nvme$subsystem", 00:31:13.324 "trtype": "$TEST_TRANSPORT", 00:31:13.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:13.324 "adrfam": "ipv4", 00:31:13.324 "trsvcid": "$NVMF_PORT", 00:31:13.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:13.324 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:31:13.324 "hdgst": ${hdgst:-false}, 00:31:13.324 "ddgst": ${ddgst:-false} 00:31:13.324 }, 00:31:13.324 "method": "bdev_nvme_attach_controller" 00:31:13.324 } 00:31:13.324 EOF 00:31:13.324 )") 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:13.324 "params": { 00:31:13.324 "name": "Nvme0", 00:31:13.324 "trtype": "tcp", 00:31:13.324 "traddr": "10.0.0.2", 00:31:13.324 "adrfam": "ipv4", 00:31:13.324 "trsvcid": "4420", 00:31:13.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:13.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:13.324 "hdgst": false, 00:31:13.324 "ddgst": false 00:31:13.324 }, 00:31:13.324 "method": "bdev_nvme_attach_controller" 00:31:13.324 },{ 00:31:13.324 "params": { 00:31:13.324 "name": "Nvme1", 00:31:13.324 "trtype": "tcp", 00:31:13.324 "traddr": "10.0.0.2", 00:31:13.324 "adrfam": "ipv4", 00:31:13.324 "trsvcid": "4420", 00:31:13.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:13.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:13.324 "hdgst": false, 00:31:13.324 "ddgst": false 00:31:13.324 }, 00:31:13.324 "method": "bdev_nvme_attach_controller" 00:31:13.324 }' 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:13.324 11:57:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:13.324 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:13.324 ... 00:31:13.324 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:13.324 ... 
00:31:13.324 fio-3.35 00:31:13.324 Starting 4 threads 00:31:13.324 EAL: No free 2048 kB hugepages reported on node 1 00:31:18.574 00:31:18.574 filename0: (groupid=0, jobs=1): err= 0: pid=2161380: Mon Jul 15 11:57:46 2024 00:31:18.574 read: IOPS=2821, BW=22.0MiB/s (23.1MB/s)(110MiB/5003msec) 00:31:18.574 slat (nsec): min=2773, max=65286, avg=11409.42, stdev=6782.40 00:31:18.574 clat (usec): min=1086, max=6757, avg=2804.82, stdev=481.41 00:31:18.574 lat (usec): min=1091, max=6766, avg=2816.23, stdev=480.89 00:31:18.574 clat percentiles (usec): 00:31:18.574 | 1.00th=[ 1680], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2442], 00:31:18.574 | 30.00th=[ 2573], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2868], 00:31:18.574 | 70.00th=[ 2933], 80.00th=[ 3130], 90.00th=[ 3490], 95.00th=[ 3687], 00:31:18.574 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4490], 99.95th=[ 6587], 00:31:18.574 | 99.99th=[ 6718] 00:31:18.574 bw ( KiB/s): min=21216, max=23728, per=25.82%, avg=22483.56, stdev=684.68, samples=9 00:31:18.574 iops : min= 2652, max= 2966, avg=2810.44, stdev=85.58, samples=9 00:31:18.574 lat (msec) : 2=2.95%, 4=94.86%, 10=2.20% 00:31:18.574 cpu : usr=94.72%, sys=4.92%, ctx=8, majf=0, minf=9 00:31:18.574 IO depths : 1=0.2%, 2=1.2%, 4=68.7%, 8=29.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 issued rwts: total=14114,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.574 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.574 filename0: (groupid=0, jobs=1): err= 0: pid=2161381: Mon Jul 15 11:57:46 2024 00:31:18.574 read: IOPS=2609, BW=20.4MiB/s (21.4MB/s)(102MiB/5002msec) 00:31:18.574 slat (usec): min=3, max=168, avg= 9.49, stdev= 4.32 00:31:18.574 clat (usec): min=1731, max=47621, avg=3039.97, stdev=1228.32 00:31:18.574 lat (usec): min=1750, max=47633, avg=3049.46, stdev=1228.12 00:31:18.574 clat percentiles (usec): 00:31:18.574 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2573], 00:31:18.574 | 30.00th=[ 2704], 40.00th=[ 2802], 50.00th=[ 2900], 60.00th=[ 2966], 00:31:18.574 | 70.00th=[ 3195], 80.00th=[ 3458], 90.00th=[ 3785], 95.00th=[ 4047], 00:31:18.574 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[47449], 00:31:18.574 | 99.99th=[47449] 00:31:18.574 bw ( KiB/s): min=18484, max=22016, per=23.66%, avg=20603.11, stdev=1017.76, samples=9 00:31:18.574 iops : min= 2310, max= 2752, avg=2575.33, stdev=127.35, samples=9 00:31:18.574 lat (msec) : 2=0.24%, 4=93.80%, 10=5.90%, 50=0.06% 00:31:18.574 cpu : usr=94.12%, sys=5.52%, ctx=8, majf=0, minf=9 00:31:18.574 IO depths : 1=0.3%, 2=1.6%, 4=68.3%, 8=29.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 issued rwts: total=13052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.574 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.574 filename1: (groupid=0, jobs=1): err= 0: pid=2161382: Mon Jul 15 11:57:46 2024 00:31:18.574 read: IOPS=2709, BW=21.2MiB/s (22.2MB/s)(106MiB/5002msec) 00:31:18.574 slat (nsec): min=5752, max=51155, avg=9893.14, stdev=5039.08 00:31:18.574 clat (usec): min=809, max=44750, avg=2927.24, stdev=1097.07 00:31:18.574 lat (usec): min=820, max=44776, avg=2937.13, stdev=1097.17 00:31:18.574 clat percentiles (usec): 00:31:18.574 | 1.00th=[ 1958], 5.00th=[ 2212], 
10.00th=[ 2409], 20.00th=[ 2606], 00:31:18.574 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2966], 00:31:18.574 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3621], 00:31:18.574 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4555], 99.95th=[44827], 00:31:18.574 | 99.99th=[44827] 00:31:18.574 bw ( KiB/s): min=20176, max=22400, per=25.04%, avg=21802.67, stdev=687.58, samples=9 00:31:18.574 iops : min= 2522, max= 2800, avg=2725.56, stdev=86.01, samples=9 00:31:18.574 lat (usec) : 1000=0.01% 00:31:18.574 lat (msec) : 2=1.24%, 4=97.03%, 10=1.66%, 50=0.06% 00:31:18.574 cpu : usr=93.94%, sys=5.70%, ctx=8, majf=0, minf=9 00:31:18.574 IO depths : 1=0.2%, 2=1.4%, 4=67.1%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 issued rwts: total=13553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.574 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.574 filename1: (groupid=0, jobs=1): err= 0: pid=2161383: Mon Jul 15 11:57:46 2024 00:31:18.574 read: IOPS=2745, BW=21.4MiB/s (22.5MB/s)(107MiB/5001msec) 00:31:18.574 slat (nsec): min=5723, max=48326, avg=10230.23, stdev=5561.09 00:31:18.574 clat (usec): min=1184, max=4716, avg=2888.19, stdev=421.62 00:31:18.574 lat (usec): min=1190, max=4725, avg=2898.42, stdev=421.91 00:31:18.574 clat percentiles (usec): 00:31:18.574 | 1.00th=[ 1663], 5.00th=[ 2212], 10.00th=[ 2409], 20.00th=[ 2606], 00:31:18.574 | 30.00th=[ 2704], 40.00th=[ 2835], 50.00th=[ 2900], 60.00th=[ 2933], 00:31:18.574 | 70.00th=[ 3064], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3621], 00:31:18.574 | 99.00th=[ 4015], 99.50th=[ 4146], 99.90th=[ 4490], 99.95th=[ 4555], 00:31:18.574 | 99.99th=[ 4686] 00:31:18.574 bw ( KiB/s): min=21056, max=23150, per=25.41%, avg=22120.67, stdev=553.97, samples=9 00:31:18.574 iops : min= 2632, max= 2893, avg=2765.00, stdev=69.07, samples=9 00:31:18.574 lat (msec) : 2=2.29%, 4=96.58%, 10=1.13% 00:31:18.574 cpu : usr=94.32%, sys=5.32%, ctx=7, majf=0, minf=9 00:31:18.574 IO depths : 1=0.1%, 2=1.4%, 4=67.3%, 8=31.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:18.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.574 issued rwts: total=13728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.574 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:18.574 00:31:18.574 Run status group 0 (all jobs): 00:31:18.574 READ: bw=85.0MiB/s (89.2MB/s), 20.4MiB/s-22.0MiB/s (21.4MB/s-23.1MB/s), io=425MiB (446MB), run=5001-5003msec 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.574 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 00:31:18.575 real 0m24.412s 00:31:18.575 user 4m53.289s 00:31:18.575 sys 0m9.334s 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 ************************************ 00:31:18.575 END TEST fio_dif_rand_params 00:31:18.575 ************************************ 00:31:18.575 11:57:46 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:18.575 11:57:46 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:18.575 11:57:46 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:18.575 11:57:46 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 ************************************ 00:31:18.575 START TEST fio_dif_digest 00:31:18.575 ************************************ 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 bdev_null0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:18.575 [2024-07-15 11:57:46.555232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.575 { 00:31:18.575 "params": { 00:31:18.575 "name": "Nvme$subsystem", 00:31:18.575 "trtype": "$TEST_TRANSPORT", 00:31:18.575 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.575 "adrfam": "ipv4", 00:31:18.575 "trsvcid": "$NVMF_PORT", 00:31:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.575 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.575 
"hdgst": ${hdgst:-false}, 00:31:18.575 "ddgst": ${ddgst:-false} 00:31:18.575 }, 00:31:18.575 "method": "bdev_nvme_attach_controller" 00:31:18.575 } 00:31:18.575 EOF 00:31:18.575 )") 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:18.575 "params": { 00:31:18.575 "name": "Nvme0", 00:31:18.575 "trtype": "tcp", 00:31:18.575 "traddr": "10.0.0.2", 00:31:18.575 "adrfam": "ipv4", 00:31:18.575 "trsvcid": "4420", 00:31:18.575 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.575 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.575 "hdgst": true, 00:31:18.575 "ddgst": true 00:31:18.575 }, 00:31:18.575 "method": "bdev_nvme_attach_controller" 00:31:18.575 }' 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:18.575 11:57:46 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:19.141 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:19.141 ... 
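Functionally, the attach config printed just above differs from the rand_params one only in its last two params: "hdgst" and "ddgst" are now true, so every NVMe/TCP PDU header and data payload carries a CRC32C digest that the receiving side checks. Reformatted here for comparison, with field values verbatim from the printf output; the jq wrapper is illustrative and merely validates the fragment:

jq . <<'EOF'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  },
  "method": "bdev_nvme_attach_controller"
}
EOF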
00:31:19.141 fio-3.35 00:31:19.141 Starting 3 threads 00:31:19.141 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.391 00:31:31.391 filename0: (groupid=0, jobs=1): err= 0: pid=2162589: Mon Jul 15 11:57:57 2024 00:31:31.391 read: IOPS=296, BW=37.0MiB/s (38.8MB/s)(372MiB/10048msec) 00:31:31.391 slat (nsec): min=3892, max=63939, avg=14172.95, stdev=5300.89 00:31:31.391 clat (usec): min=5755, max=51073, avg=10089.75, stdev=1702.33 00:31:31.391 lat (usec): min=5764, max=51092, avg=10103.92, stdev=1702.48 00:31:31.391 clat percentiles (usec): 00:31:31.391 | 1.00th=[ 6849], 5.00th=[ 7373], 10.00th=[ 7767], 20.00th=[ 8848], 00:31:31.391 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:31:31.391 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:31:31.391 | 99.00th=[12518], 99.50th=[12780], 99.90th=[15008], 99.95th=[49021], 00:31:31.391 | 99.99th=[51119] 00:31:31.391 bw ( KiB/s): min=34816, max=43264, per=36.62%, avg=38092.80, stdev=2040.58, samples=20 00:31:31.391 iops : min= 272, max= 338, avg=297.60, stdev=15.94, samples=20 00:31:31.391 lat (msec) : 10=36.90%, 20=63.03%, 50=0.03%, 100=0.03% 00:31:31.391 cpu : usr=93.16%, sys=6.49%, ctx=13, majf=0, minf=75 00:31:31.391 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 issued rwts: total=2978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.391 filename0: (groupid=0, jobs=1): err= 0: pid=2162590: Mon Jul 15 11:57:57 2024 00:31:31.391 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(364MiB/10045msec) 00:31:31.391 slat (nsec): min=6149, max=36879, avg=12545.58, stdev=4465.10 00:31:31.391 clat (usec): min=6268, max=51811, avg=10318.72, stdev=1643.83 00:31:31.391 lat (usec): min=6289, max=51818, avg=10331.27, stdev=1643.71 00:31:31.391 clat percentiles (usec): 00:31:31.391 | 1.00th=[ 7308], 5.00th=[ 7701], 10.00th=[ 8029], 20.00th=[ 9372], 00:31:31.391 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10814], 00:31:31.391 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:31:31.391 | 99.00th=[12780], 99.50th=[13042], 99.90th=[14746], 99.95th=[45876], 00:31:31.391 | 99.99th=[51643] 00:31:31.391 bw ( KiB/s): min=35072, max=39680, per=35.81%, avg=37248.00, stdev=1388.57, samples=20 00:31:31.391 iops : min= 274, max= 310, avg=291.00, stdev=10.85, samples=20 00:31:31.391 lat (msec) : 10=30.53%, 20=69.40%, 50=0.03%, 100=0.03% 00:31:31.391 cpu : usr=92.02%, sys=7.64%, ctx=16, majf=0, minf=123 00:31:31.391 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 issued rwts: total=2912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.391 filename0: (groupid=0, jobs=1): err= 0: pid=2162591: Mon Jul 15 11:57:57 2024 00:31:31.391 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10004msec) 00:31:31.391 slat (nsec): min=6159, max=36954, avg=13399.33, stdev=4458.22 00:31:31.391 clat (usec): min=5679, max=94366, avg=13170.43, stdev=9035.60 00:31:31.391 lat (usec): min=5690, max=94389, avg=13183.83, stdev=9035.55 00:31:31.391 clat percentiles (usec): 
00:31:31.391 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:31:31.391 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:31:31.391 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12649], 95.00th=[13960], 00:31:31.391 | 99.00th=[53740], 99.50th=[54264], 99.90th=[56361], 99.95th=[93848], 00:31:31.391 | 99.99th=[93848] 00:31:31.391 bw ( KiB/s): min=19968, max=35072, per=28.25%, avg=29386.11, stdev=3932.36, samples=19 00:31:31.391 iops : min= 156, max= 274, avg=229.58, stdev=30.72, samples=19 00:31:31.391 lat (msec) : 10=6.46%, 20=88.88%, 100=4.66% 00:31:31.391 cpu : usr=92.67%, sys=7.00%, ctx=16, majf=0, minf=165 00:31:31.391 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:31.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:31.391 issued rwts: total=2276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:31.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:31.391 00:31:31.391 Run status group 0 (all jobs): 00:31:31.391 READ: bw=102MiB/s (107MB/s), 28.4MiB/s-37.0MiB/s (29.8MB/s-38.8MB/s), io=1021MiB (1070MB), run=10004-10048msec 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.391 00:31:31.391 real 0m11.232s 00:31:31.391 user 0m36.479s 00:31:31.391 sys 0m2.458s 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:31.391 11:57:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:31.391 ************************************ 00:31:31.391 END TEST fio_dif_digest 00:31:31.392 ************************************ 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:31.392 11:57:57 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:31.392 11:57:57 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:31:31.392 rmmod nvme_tcp 00:31:31.392 rmmod nvme_fabrics 00:31:31.392 rmmod nvme_keyring 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2153131 ']' 00:31:31.392 11:57:57 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2153131 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2153131 ']' 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2153131 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2153131 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2153131' 00:31:31.392 killing process with pid 2153131 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2153131 00:31:31.392 11:57:57 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2153131 00:31:31.392 11:57:58 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:31.392 11:57:58 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:32.761 Waiting for block devices as requested 00:31:32.761 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:32.761 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:33.018 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:33.018 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:33.018 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:33.274 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:33.274 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:33.274 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:33.274 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:33.531 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:33.531 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:33.531 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:33.790 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:33.791 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:33.791 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:34.048 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:34.048 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:31:34.305 11:58:02 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:34.305 11:58:02 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:34.305 11:58:02 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:34.305 11:58:02 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:34.305 11:58:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:34.305 11:58:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:34.305 11:58:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.198 11:58:04 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:36.198 00:31:36.198 real 1m15.276s 00:31:36.198 user 7m13.732s 00:31:36.198 sys 0m29.052s 00:31:36.198 11:58:04 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:31:36.198 11:58:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:36.198 ************************************ 00:31:36.198 END TEST nvmf_dif 00:31:36.198 ************************************ 00:31:36.461 11:58:04 -- common/autotest_common.sh@1142 -- # return 0 00:31:36.461 11:58:04 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.461 11:58:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:36.461 11:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.461 11:58:04 -- common/autotest_common.sh@10 -- # set +x 00:31:36.461 ************************************ 00:31:36.461 START TEST nvmf_abort_qd_sizes 00:31:36.461 ************************************ 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:36.461 * Looking for test storage... 00:31:36.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:36.461 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:36.462 11:58:04 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:36.462 11:58:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:43.023 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:31:43.024 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:43.024 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:43.024 Found net devices under 0000:af:00.0: cvl_0_0 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:43.024 Found net devices under 0000:af:00.1: cvl_0_1 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
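The NIC discovery above is a plain sysfs walk: each supported Intel E810 function (vendor 0x8086, device 0x159b) is mapped to the interface name the kernel created under the device's net/ directory. A minimal sketch of the same lookup, mirroring the pci_net_devs expansion in the trace:

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done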
00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:43.024 11:58:10 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:43.024 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:43.024 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:43.024 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:43.024 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:43.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:31:43.281 00:31:43.281 --- 10.0.0.2 ping statistics --- 00:31:43.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.281 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:43.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:43.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:31:43.281 00:31:43.281 --- 10.0.0.1 ping statistics --- 00:31:43.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.281 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:43.281 11:58:11 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:46.559 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:46.559 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:47.932 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2170837 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2170837 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2170837 ']' 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:47.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.932 11:58:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:47.932 [2024-07-15 11:58:15.940964] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:31:47.932 [2024-07-15 11:58:15.941014] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:47.932 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.932 [2024-07-15 11:58:16.036350] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:48.190 [2024-07-15 11:58:16.112974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.190 [2024-07-15 11:58:16.113012] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.190 [2024-07-15 11:58:16.113022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.190 [2024-07-15 11:58:16.113031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.190 [2024-07-15 11:58:16.113038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.190 [2024-07-15 11:58:16.113077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.190 [2024-07-15 11:58:16.113094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:48.190 [2024-07-15 11:58:16.113183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:48.190 [2024-07-15 11:58:16.113185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:d8:00.0 ]] 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:d8:00.0 ]] 00:31:48.756 11:58:16 
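The nvmf_tcp_init sequence traced above turns the two E810 ports into a point-to-point test topology: one port moves into a private namespace and is addressed as the target (10.0.0.2), its sibling stays in the root namespace as the initiator (10.0.0.1). Restated as a consolidated sketch of the traced commands:

    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                            # initiator -> target sanity check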
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:d8:00.0 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:d8:00.0 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.756 11:58:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:48.756 ************************************ 00:31:48.756 START TEST spdk_target_abort 00:31:48.756 ************************************ 00:31:48.756 11:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:31:48.756 11:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:48.756 11:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target 00:31:48.756 11:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.756 11:58:16 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.035 spdk_targetn1 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.035 [2024-07-15 11:58:19.690541] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:52.035 [2024-07-15 11:58:19.726790] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:52.035 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:52.036 11:58:19 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:52.036 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:55.316 Initializing NVMe Controllers 00:31:55.316 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:55.316 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:55.316 Initialization complete. Launching workers. 00:31:55.316 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10232, failed: 0 00:31:55.316 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1314, failed to submit 8918 00:31:55.316 success 854, unsuccess 460, failed 0 00:31:55.316 11:58:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.316 11:58:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:55.316 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.592 Initializing NVMe Controllers 00:31:58.592 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:31:58.592 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:58.592 Initialization complete. Launching workers. 00:31:58.592 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8696, failed: 0 00:31:58.593 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1277, failed to submit 7419 00:31:58.593 success 298, unsuccess 979, failed 0 00:31:58.593 11:58:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.593 11:58:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.593 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.870 Initializing NVMe Controllers 00:32:01.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:01.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:01.870 Initialization complete. Launching workers. 
00:32:01.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38755, failed: 0 00:32:01.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2771, failed to submit 35984 00:32:01.870 success 595, unsuccess 2176, failed 0 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.870 11:58:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2170837 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2170837 ']' 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2170837 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2170837 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2170837' 00:32:03.243 killing process with pid 2170837 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2170837 00:32:03.243 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2170837 00:32:03.502 00:32:03.502 real 0m14.671s 00:32:03.502 user 0m57.881s 00:32:03.502 sys 0m2.864s 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:03.502 ************************************ 00:32:03.502 END TEST spdk_target_abort 00:32:03.502 ************************************ 00:32:03.502 11:58:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:03.502 11:58:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:03.502 11:58:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:03.502 11:58:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:03.502 11:58:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:03.502 
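Summarized, the spdk_target_abort test that just finished attaches the local NVMe SSD as a bdev, exports it over NVMe/TCP, then races abort commands against mixed 4 KiB I/O at increasing queue depths. The traced rpc_cmd calls are equivalent to the following rpc.py invocations (a sketch; paths abbreviated relative to the SPDK repo):

    rpc.py bdev_nvme_attach_controller -t pcie -a 0000:d8:00.0 -b spdk_target
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
    for qd in 4 24 64; do   # the queue depths traced above
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    done

Each run reports how many aborts were submitted and how many raced to completion ("success" vs "unsuccess" in the output above).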
************************************ 00:32:03.502 START TEST kernel_target_abort 00:32:03.502 ************************************ 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.502 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:03.760 11:58:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:07.115 Waiting for block devices as requested 00:32:07.115 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:07.115 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:07.374 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:07.374 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:07.374 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:07.632 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:07.632 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:07.632 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:07.889 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:07.889 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:07.889 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:08.147 No valid GPT data, bailing 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:08.147 11:58:36 
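Device selection for the kernel target, as traced here, picks the first non-zoned NVMe block device that carries no partition table (spdk-gpt.py reports "No valid GPT data" and blkid finds no PTTYPE). A minimal sketch of that check, approximating the traced block_in_use logic:

    for block in /sys/block/nvme*; do
        dev=/dev/${block##*/}
        [[ $(<"$block/queue/zoned") == none ]] || continue   # skip zoned devices
        [[ -z $(blkid -s PTTYPE -o value "$dev") ]] && { nvme=$dev; break; }
    done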
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e --hostid=006f0d1b-21c0-e711-906e-00163566263e -a 10.0.0.1 -t tcp -s 4420 00:32:08.147 00:32:08.147 Discovery Log Number of Records 2, Generation counter 2 00:32:08.147 =====Discovery Log Entry 0====== 00:32:08.147 trtype: tcp 00:32:08.147 adrfam: ipv4 00:32:08.147 subtype: current discovery subsystem 00:32:08.147 treq: not specified, sq flow control disable supported 00:32:08.147 portid: 1 00:32:08.147 trsvcid: 4420 00:32:08.147 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:08.147 traddr: 10.0.0.1 00:32:08.147 eflags: none 00:32:08.147 sectype: none 00:32:08.147 =====Discovery Log Entry 1====== 00:32:08.147 trtype: tcp 00:32:08.147 adrfam: ipv4 00:32:08.147 subtype: nvme subsystem 00:32:08.147 treq: not specified, sq flow control disable supported 00:32:08.147 portid: 1 00:32:08.147 trsvcid: 4420 00:32:08.147 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:08.147 traddr: 10.0.0.1 00:32:08.147 eflags: none 00:32:08.147 sectype: none 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.147 11:58:36 
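The configure_kernel_target steps just traced drive the in-kernel nvmet target entirely through configfs. Consolidated, with the standard nvmet attribute names filled in (the trace shows only the values being echoed, so the exact destination files are an inference from the stock configfs layout):

    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    mkdir ports/1
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp > ports/1/addr_trtype
    echo 4420 > ports/1/addr_trsvcid
    echo ipv4 > ports/1/addr_adrfam
    ln -s "$PWD/subsystems/nqn.2016-06.io.spdk:testnqn" ports/1/subsystems/
    # Teardown (clean_kernel_target, traced later) reverses this: remove the port
    # link, rmdir namespaces/1, ports/1 and the subsystem, then modprobe -r nvmet_tcp nvmet.

The nvme discover output above confirms the port is live before the abort runs start.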
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:08.147 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:08.148 11:58:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:08.148 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.421 Initializing NVMe Controllers 00:32:11.421 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:11.421 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:11.421 Initialization complete. Launching workers. 00:32:11.421 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 72767, failed: 0 00:32:11.421 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 72767, failed to submit 0 00:32:11.421 success 0, unsuccess 72767, failed 0 00:32:11.421 11:58:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.421 11:58:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:11.421 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.691 Initializing NVMe Controllers 00:32:14.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:14.691 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:14.691 Initialization complete. Launching workers. 
00:32:14.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 125304, failed: 0 00:32:14.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31462, failed to submit 93842 00:32:14.691 success 0, unsuccess 31462, failed 0 00:32:14.691 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.691 11:58:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:14.691 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.994 Initializing NVMe Controllers 00:32:17.994 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.994 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:17.994 Initialization complete. Launching workers. 00:32:17.994 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 122448, failed: 0 00:32:17.994 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30642, failed to submit 91806 00:32:17.994 success 0, unsuccess 30642, failed 0 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:17.994 11:58:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:20.522 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:20.522 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:32:20.780 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:32:20.780 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:32:22.155 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:32:22.413 00:32:22.413 real 0m18.694s 00:32:22.413 user 0m7.737s 00:32:22.413 sys 0m5.961s 00:32:22.413 11:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:22.413 11:58:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.413 ************************************ 00:32:22.413 END TEST kernel_target_abort 00:32:22.413 ************************************ 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:22.413 rmmod nvme_tcp 00:32:22.413 rmmod nvme_fabrics 00:32:22.413 rmmod nvme_keyring 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2170837 ']' 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2170837 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2170837 ']' 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2170837 00:32:22.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2170837) - No such process 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2170837 is not found' 00:32:22.413 Process with pid 2170837 is not found 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:22.413 11:58:50 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:24.938 Waiting for block devices as requested 00:32:24.938 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:25.195 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:25.195 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:25.195 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:25.452 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:25.452 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:25.452 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:25.709 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:25.709 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:25.709 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:25.965 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:25.965 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:25.965 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:26.222 0000:80:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:32:26.222 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:26.222 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:26.479 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:26.479 11:58:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:29.009 11:58:56 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:29.009 00:32:29.009 real 0m52.209s 00:32:29.009 user 1m9.728s 00:32:29.009 sys 0m18.593s 00:32:29.009 11:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:29.009 11:58:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:29.009 ************************************ 00:32:29.009 END TEST nvmf_abort_qd_sizes 00:32:29.009 ************************************ 00:32:29.009 11:58:56 -- common/autotest_common.sh@1142 -- # return 0 00:32:29.009 11:58:56 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:29.009 11:58:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:29.009 11:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.009 11:58:56 -- common/autotest_common.sh@10 -- # set +x 00:32:29.009 ************************************ 00:32:29.009 START TEST keyring_file 00:32:29.009 ************************************ 00:32:29.009 11:58:56 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:29.009 * Looking for test storage... 
00:32:29.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:29.009 11:58:56 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:29.009 11:58:56 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:29.009 11:58:56 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:29.009 11:58:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.009 11:58:56 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.009 11:58:56 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.009 11:58:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:29.009 11:58:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.dr2XzTUE0Q 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:29.009 11:58:56 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.dr2XzTUE0Q 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.dr2XzTUE0Q 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.dr2XzTUE0Q 00:32:29.009 11:58:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.PSYQVH6BaI 00:32:29.009 11:58:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:29.009 11:58:56 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:29.010 11:58:56 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:29.010 11:58:56 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:29.010 11:58:56 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:29.010 11:58:56 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:29.010 11:58:56 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:29.010 11:58:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.PSYQVH6BaI 00:32:29.010 11:58:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.PSYQVH6BaI 00:32:29.010 11:58:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.PSYQVH6BaI 00:32:29.010 11:58:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=2179979 00:32:29.010 11:58:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2179979 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2179979 ']' 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:29.010 11:58:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.010 11:58:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:29.010 [2024-07-15 11:58:56.890037] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 
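For reference, a minimal sketch of what the prep_key/format_interchange_psk steps traced above amount to: the raw hex string is wrapped in the NVMe TLS PSK interchange format (the NVMeTLSkey-1 prefix, a two-digit hash indicator, then base64 of the key bytes plus a CRC32 trailer) and written to a 0600 temp file. The little-endian CRC byte order and the argv plumbing below are assumptions inferred from the trace, not lifted from nvmf/common.sh:

key=00112233445566778899aabbccddeeff digest=0
path=$(mktemp)    # e.g. /tmp/tmp.dr2XzTUE0Q above
python3 - "$key" "$digest" > "$path" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # assumed byte order
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
chmod 0600 "$path"  # keyring_file_add_key rejects looser modes, as the 0660 negative test below shows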
00:32:29.010 [2024-07-15 11:58:56.890097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179979 ] 00:32:29.010 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.010 [2024-07-15 11:58:56.958283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.010 [2024-07-15 11:58:57.027855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.576 11:58:57 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.576 11:58:57 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:29.576 11:58:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:29.576 11:58:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.576 11:58:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.576 [2024-07-15 11:58:57.666116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.835 null0 00:32:29.835 [2024-07-15 11:58:57.698171] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:29.835 [2024-07-15 11:58:57.698478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:29.835 [2024-07-15 11:58:57.706190] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.835 11:58:57 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.835 [2024-07-15 11:58:57.718222] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:29.835 request: 00:32:29.835 { 00:32:29.835 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:29.835 "secure_channel": false, 00:32:29.835 "listen_address": { 00:32:29.835 "trtype": "tcp", 00:32:29.835 "traddr": "127.0.0.1", 00:32:29.835 "trsvcid": "4420" 00:32:29.835 }, 00:32:29.835 "method": "nvmf_subsystem_add_listener", 00:32:29.835 "req_id": 1 00:32:29.835 } 00:32:29.835 Got JSON-RPC error response 00:32:29.835 response: 00:32:29.835 { 00:32:29.835 "code": -32602, 00:32:29.835 "message": "Invalid parameters" 00:32:29.835 } 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@651 -- # es=1 
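Outside the NOT wrapper, the duplicate-listener probe above boils down to a single rpc.py call against the target socket; a sketch, assuming the listener was already created by the earlier rpc_cmd batch:

./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener \
    nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
# A second invocation against the same trid fails as captured above:
# "Listener already exists" surfaces as JSON-RPC code -32602 ("Invalid
# parameters"), which the harness converts to es=1 - the expected outcome.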
00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:29.835 11:58:57 keyring_file -- keyring/file.sh@46 -- # bperfpid=2180220 00:32:29.835 11:58:57 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2180220 /var/tmp/bperf.sock 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2180220 ']' 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:29.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:29.835 11:58:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:29.835 11:58:57 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:29.835 [2024-07-15 11:58:57.768018] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:32:29.835 [2024-07-15 11:58:57.768065] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180220 ] 00:32:29.835 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.835 [2024-07-15 11:58:57.837423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:29.835 [2024-07-15 11:58:57.912100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.790 11:58:58 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:30.790 11:58:58 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:30.790 11:58:58 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:30.790 11:58:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:30.790 11:58:58 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PSYQVH6BaI 00:32:30.790 11:58:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PSYQVH6BaI 00:32:31.048 11:58:58 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:31.048 11:58:58 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:31.048 11:58:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.048 11:58:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.048 11:58:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.048 11:58:59 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.dr2XzTUE0Q == \/\t\m\p\/\t\m\p\.\d\r\2\X\z\T\U\E\0\Q ]] 00:32:31.048 11:58:59 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:32:31.048 11:58:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:31.048 11:58:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.048 11:58:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.048 11:58:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.305 11:58:59 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.PSYQVH6BaI == \/\t\m\p\/\t\m\p\.\P\S\Y\Q\V\H\6\B\a\I ]] 00:32:31.305 11:58:59 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:31.305 11:58:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.305 11:58:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:31.305 11:58:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.305 11:58:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.305 11:58:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.563 11:58:59 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:31.563 11:58:59 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.563 11:58:59 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:31.563 11:58:59 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:31.563 11:58:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:31.821 [2024-07-15 11:58:59.765962] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:31.821 nvme0n1 00:32:31.821 11:58:59 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:31.821 11:58:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:31.821 11:58:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:31.821 11:58:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:31.821 11:58:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:31.821 11:58:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:32.078 11:59:00 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:32.078 11:59:00 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:32.078 11:59:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:32.078 11:59:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:32.078 11:59:00 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:32.078 11:59:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:32.078 11:59:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:32.335 11:59:00 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:32.335 11:59:00 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:32.335 Running I/O for 1 seconds... 00:32:33.265 00:32:33.265 Latency(us) 00:32:33.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:33.265 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:33.265 nvme0n1 : 1.01 13161.83 51.41 0.00 0.00 9695.92 5478.81 19293.80 00:32:33.265 =================================================================================================================== 00:32:33.265 Total : 13161.83 51.41 0.00 0.00 9695.92 5478.81 19293.80 00:32:33.265 0 00:32:33.265 11:59:01 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:33.265 11:59:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:33.523 11:59:01 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:33.523 11:59:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:33.523 11:59:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.523 11:59:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.523 11:59:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.523 11:59:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:33.780 11:59:01 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:33.780 11:59:01 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:33.781 11:59:01 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:33.781 11:59:01 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.781 11:59:01 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:33.781 11:59:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:33.781 11:59:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:34.039 [2024-07-15 11:59:02.024725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:34.039 [2024-07-15 11:59:02.025444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02840 (107): Transport endpoint is not connected 00:32:34.039 [2024-07-15 11:59:02.026437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02840 (9): Bad file descriptor 00:32:34.039 [2024-07-15 11:59:02.027438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:34.039 [2024-07-15 11:59:02.027451] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:34.039 [2024-07-15 11:59:02.027461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:34.039 request: 00:32:34.039 { 00:32:34.039 "name": "nvme0", 00:32:34.039 "trtype": "tcp", 00:32:34.039 "traddr": "127.0.0.1", 00:32:34.039 "adrfam": "ipv4", 00:32:34.039 "trsvcid": "4420", 00:32:34.039 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:34.039 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:34.039 "prchk_reftag": false, 00:32:34.039 "prchk_guard": false, 00:32:34.039 "hdgst": false, 00:32:34.039 "ddgst": false, 00:32:34.039 "psk": "key1", 00:32:34.039 "method": "bdev_nvme_attach_controller", 00:32:34.039 "req_id": 1 00:32:34.039 } 00:32:34.039 Got JSON-RPC error response 00:32:34.039 response: 00:32:34.039 { 00:32:34.039 "code": -5, 00:32:34.039 "message": "Input/output error" 00:32:34.039 } 00:32:34.039 11:59:02 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:34.039 11:59:02 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:34.039 11:59:02 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:34.039 11:59:02 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:34.039 11:59:02 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:34.039 11:59:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:34.039 11:59:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.039 11:59:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.039 11:59:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:34.039 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.297 11:59:02 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:34.297 11:59:02 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:34.297 11:59:02 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:34.297 11:59:02 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:34.297 11:59:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:34.297 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.297 11:59:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:34.297 11:59:02 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:34.297 11:59:02 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:34.297 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:34.553 11:59:02 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:34.553 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:34.810 11:59:02 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:34.810 11:59:02 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:34.810 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:34.810 11:59:02 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:34.810 11:59:02 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.dr2XzTUE0Q 00:32:34.810 11:59:02 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:34.810 11:59:02 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:34.810 11:59:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:35.067 [2024-07-15 11:59:03.048383] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.dr2XzTUE0Q': 0100660 00:32:35.067 [2024-07-15 11:59:03.048409] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:35.067 request: 00:32:35.067 { 00:32:35.067 "name": "key0", 00:32:35.067 "path": "/tmp/tmp.dr2XzTUE0Q", 00:32:35.067 "method": "keyring_file_add_key", 00:32:35.067 "req_id": 1 00:32:35.067 } 00:32:35.067 Got JSON-RPC error response 00:32:35.067 response: 00:32:35.067 { 00:32:35.067 "code": -1, 00:32:35.067 "message": "Operation not permitted" 00:32:35.067 } 00:32:35.067 11:59:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:35.067 11:59:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:35.067 11:59:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:35.067 11:59:03 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:35.067 11:59:03 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.dr2XzTUE0Q 00:32:35.067 11:59:03 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:35.067 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.dr2XzTUE0Q 00:32:35.324 11:59:03 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.dr2XzTUE0Q 00:32:35.324 11:59:03 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:35.324 11:59:03 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:35.324 11:59:03 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:35.324 11:59:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.324 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:35.581 [2024-07-15 11:59:03.573770] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.dr2XzTUE0Q': No such file or directory 00:32:35.581 [2024-07-15 11:59:03.573794] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:35.581 [2024-07-15 11:59:03.573816] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:35.581 [2024-07-15 11:59:03.573824] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:35.581 [2024-07-15 11:59:03.573842] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:35.581 request: 00:32:35.581 { 00:32:35.581 "name": "nvme0", 00:32:35.581 "trtype": "tcp", 00:32:35.581 "traddr": "127.0.0.1", 00:32:35.581 "adrfam": "ipv4", 00:32:35.581 
"trsvcid": "4420", 00:32:35.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.581 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:35.581 "prchk_reftag": false, 00:32:35.581 "prchk_guard": false, 00:32:35.581 "hdgst": false, 00:32:35.581 "ddgst": false, 00:32:35.581 "psk": "key0", 00:32:35.581 "method": "bdev_nvme_attach_controller", 00:32:35.581 "req_id": 1 00:32:35.581 } 00:32:35.581 Got JSON-RPC error response 00:32:35.581 response: 00:32:35.581 { 00:32:35.581 "code": -19, 00:32:35.581 "message": "No such device" 00:32:35.581 } 00:32:35.581 11:59:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:32:35.581 11:59:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:35.581 11:59:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:35.581 11:59:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:35.581 11:59:03 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:35.581 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:35.839 11:59:03 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gMsJgHjer4 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:35.839 11:59:03 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gMsJgHjer4 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gMsJgHjer4 00:32:35.839 11:59:03 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.gMsJgHjer4 00:32:35.839 11:59:03 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gMsJgHjer4 00:32:35.839 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gMsJgHjer4 00:32:36.096 11:59:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.096 11:59:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:36.096 nvme0n1 00:32:36.096 
11:59:04 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:36.096 11:59:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.096 11:59:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.096 11:59:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.096 11:59:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.096 11:59:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.353 11:59:04 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:36.353 11:59:04 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:36.353 11:59:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:36.611 11:59:04 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:36.611 11:59:04 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.611 11:59:04 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:36.611 11:59:04 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:36.611 11:59:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:36.869 11:59:04 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:36.869 11:59:04 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:36.869 11:59:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:37.126 11:59:05 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:37.126 11:59:05 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:37.126 11:59:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:37.383 11:59:05 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:37.383 11:59:05 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gMsJgHjer4 00:32:37.383 11:59:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gMsJgHjer4 00:32:37.383 11:59:05 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.PSYQVH6BaI 00:32:37.383 11:59:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.PSYQVH6BaI 00:32:37.641 11:59:05 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.641 11:59:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:37.899 nvme0n1 00:32:37.899 11:59:05 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:37.899 11:59:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:38.158 11:59:06 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:38.158 "subsystems": [ 00:32:38.158 { 00:32:38.158 "subsystem": "keyring", 00:32:38.158 "config": [ 00:32:38.158 { 00:32:38.158 "method": "keyring_file_add_key", 00:32:38.158 "params": { 00:32:38.158 "name": "key0", 00:32:38.158 "path": "/tmp/tmp.gMsJgHjer4" 00:32:38.158 } 00:32:38.158 }, 00:32:38.158 { 00:32:38.158 "method": "keyring_file_add_key", 00:32:38.158 "params": { 00:32:38.158 "name": "key1", 00:32:38.158 "path": "/tmp/tmp.PSYQVH6BaI" 00:32:38.158 } 00:32:38.158 } 00:32:38.158 ] 00:32:38.158 }, 00:32:38.158 { 00:32:38.158 "subsystem": "iobuf", 00:32:38.158 "config": [ 00:32:38.158 { 00:32:38.158 "method": "iobuf_set_options", 00:32:38.158 "params": { 00:32:38.158 "small_pool_count": 8192, 00:32:38.158 "large_pool_count": 1024, 00:32:38.158 "small_bufsize": 8192, 00:32:38.158 "large_bufsize": 135168 00:32:38.158 } 00:32:38.158 } 00:32:38.158 ] 00:32:38.158 }, 00:32:38.158 { 00:32:38.158 "subsystem": "sock", 00:32:38.158 "config": [ 00:32:38.158 { 00:32:38.158 "method": "sock_set_default_impl", 00:32:38.158 "params": { 00:32:38.158 "impl_name": "posix" 00:32:38.158 } 00:32:38.158 }, 00:32:38.158 { 00:32:38.158 "method": "sock_impl_set_options", 00:32:38.158 "params": { 00:32:38.158 "impl_name": "ssl", 00:32:38.158 "recv_buf_size": 4096, 00:32:38.158 "send_buf_size": 4096, 00:32:38.158 "enable_recv_pipe": true, 00:32:38.158 "enable_quickack": false, 00:32:38.158 "enable_placement_id": 0, 00:32:38.158 "enable_zerocopy_send_server": true, 00:32:38.158 "enable_zerocopy_send_client": false, 00:32:38.158 "zerocopy_threshold": 0, 00:32:38.158 "tls_version": 0, 00:32:38.158 "enable_ktls": false 00:32:38.158 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "sock_impl_set_options", 00:32:38.159 "params": { 00:32:38.159 "impl_name": "posix", 00:32:38.159 "recv_buf_size": 2097152, 00:32:38.159 "send_buf_size": 2097152, 00:32:38.159 "enable_recv_pipe": true, 00:32:38.159 "enable_quickack": false, 00:32:38.159 "enable_placement_id": 0, 00:32:38.159 "enable_zerocopy_send_server": true, 00:32:38.159 "enable_zerocopy_send_client": false, 00:32:38.159 "zerocopy_threshold": 0, 00:32:38.159 "tls_version": 0, 00:32:38.159 "enable_ktls": false 00:32:38.159 } 00:32:38.159 } 00:32:38.159 ] 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "subsystem": "vmd", 00:32:38.159 "config": [] 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "subsystem": "accel", 00:32:38.159 "config": [ 00:32:38.159 { 00:32:38.159 "method": "accel_set_options", 00:32:38.159 "params": { 00:32:38.159 "small_cache_size": 128, 00:32:38.159 "large_cache_size": 16, 00:32:38.159 "task_count": 2048, 00:32:38.159 "sequence_count": 2048, 00:32:38.159 "buf_count": 2048 00:32:38.159 } 00:32:38.159 } 00:32:38.159 ] 00:32:38.159 
}, 00:32:38.159 { 00:32:38.159 "subsystem": "bdev", 00:32:38.159 "config": [ 00:32:38.159 { 00:32:38.159 "method": "bdev_set_options", 00:32:38.159 "params": { 00:32:38.159 "bdev_io_pool_size": 65535, 00:32:38.159 "bdev_io_cache_size": 256, 00:32:38.159 "bdev_auto_examine": true, 00:32:38.159 "iobuf_small_cache_size": 128, 00:32:38.159 "iobuf_large_cache_size": 16 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_raid_set_options", 00:32:38.159 "params": { 00:32:38.159 "process_window_size_kb": 1024 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_iscsi_set_options", 00:32:38.159 "params": { 00:32:38.159 "timeout_sec": 30 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_nvme_set_options", 00:32:38.159 "params": { 00:32:38.159 "action_on_timeout": "none", 00:32:38.159 "timeout_us": 0, 00:32:38.159 "timeout_admin_us": 0, 00:32:38.159 "keep_alive_timeout_ms": 10000, 00:32:38.159 "arbitration_burst": 0, 00:32:38.159 "low_priority_weight": 0, 00:32:38.159 "medium_priority_weight": 0, 00:32:38.159 "high_priority_weight": 0, 00:32:38.159 "nvme_adminq_poll_period_us": 10000, 00:32:38.159 "nvme_ioq_poll_period_us": 0, 00:32:38.159 "io_queue_requests": 512, 00:32:38.159 "delay_cmd_submit": true, 00:32:38.159 "transport_retry_count": 4, 00:32:38.159 "bdev_retry_count": 3, 00:32:38.159 "transport_ack_timeout": 0, 00:32:38.159 "ctrlr_loss_timeout_sec": 0, 00:32:38.159 "reconnect_delay_sec": 0, 00:32:38.159 "fast_io_fail_timeout_sec": 0, 00:32:38.159 "disable_auto_failback": false, 00:32:38.159 "generate_uuids": false, 00:32:38.159 "transport_tos": 0, 00:32:38.159 "nvme_error_stat": false, 00:32:38.159 "rdma_srq_size": 0, 00:32:38.159 "io_path_stat": false, 00:32:38.159 "allow_accel_sequence": false, 00:32:38.159 "rdma_max_cq_size": 0, 00:32:38.159 "rdma_cm_event_timeout_ms": 0, 00:32:38.159 "dhchap_digests": [ 00:32:38.159 "sha256", 00:32:38.159 "sha384", 00:32:38.159 "sha512" 00:32:38.159 ], 00:32:38.159 "dhchap_dhgroups": [ 00:32:38.159 "null", 00:32:38.159 "ffdhe2048", 00:32:38.159 "ffdhe3072", 00:32:38.159 "ffdhe4096", 00:32:38.159 "ffdhe6144", 00:32:38.159 "ffdhe8192" 00:32:38.159 ] 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_nvme_attach_controller", 00:32:38.159 "params": { 00:32:38.159 "name": "nvme0", 00:32:38.159 "trtype": "TCP", 00:32:38.159 "adrfam": "IPv4", 00:32:38.159 "traddr": "127.0.0.1", 00:32:38.159 "trsvcid": "4420", 00:32:38.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.159 "prchk_reftag": false, 00:32:38.159 "prchk_guard": false, 00:32:38.159 "ctrlr_loss_timeout_sec": 0, 00:32:38.159 "reconnect_delay_sec": 0, 00:32:38.159 "fast_io_fail_timeout_sec": 0, 00:32:38.159 "psk": "key0", 00:32:38.159 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.159 "hdgst": false, 00:32:38.159 "ddgst": false 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_nvme_set_hotplug", 00:32:38.159 "params": { 00:32:38.159 "period_us": 100000, 00:32:38.159 "enable": false 00:32:38.159 } 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "method": "bdev_wait_for_examine" 00:32:38.159 } 00:32:38.159 ] 00:32:38.159 }, 00:32:38.159 { 00:32:38.159 "subsystem": "nbd", 00:32:38.159 "config": [] 00:32:38.159 } 00:32:38.159 ] 00:32:38.159 }' 00:32:38.159 11:59:06 keyring_file -- keyring/file.sh@114 -- # killprocess 2180220 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2180220 ']' 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2180220 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2180220 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2180220' 00:32:38.159 killing process with pid 2180220 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@967 -- # kill 2180220 00:32:38.159 Received shutdown signal, test time was about 1.000000 seconds 00:32:38.159 00:32:38.159 Latency(us) 00:32:38.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:38.159 =================================================================================================================== 00:32:38.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:38.159 11:59:06 keyring_file -- common/autotest_common.sh@972 -- # wait 2180220 00:32:38.418 11:59:06 keyring_file -- keyring/file.sh@117 -- # bperfpid=2181688 00:32:38.418 11:59:06 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2181688 /var/tmp/bperf.sock 00:32:38.418 11:59:06 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2181688 ']' 00:32:38.418 11:59:06 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.418 11:59:06 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:38.418 11:59:06 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.418 11:59:06 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:38.418 "subsystems": [ 00:32:38.418 { 00:32:38.418 "subsystem": "keyring", 00:32:38.418 "config": [ 00:32:38.418 { 00:32:38.418 "method": "keyring_file_add_key", 00:32:38.418 "params": { 00:32:38.418 "name": "key0", 00:32:38.418 "path": "/tmp/tmp.gMsJgHjer4" 00:32:38.418 } 00:32:38.418 }, 00:32:38.418 { 00:32:38.418 "method": "keyring_file_add_key", 00:32:38.418 "params": { 00:32:38.418 "name": "key1", 00:32:38.418 "path": "/tmp/tmp.PSYQVH6BaI" 00:32:38.418 } 00:32:38.418 } 00:32:38.418 ] 00:32:38.418 }, 00:32:38.418 { 00:32:38.418 "subsystem": "iobuf", 00:32:38.418 "config": [ 00:32:38.419 { 00:32:38.419 "method": "iobuf_set_options", 00:32:38.419 "params": { 00:32:38.419 "small_pool_count": 8192, 00:32:38.419 "large_pool_count": 1024, 00:32:38.419 "small_bufsize": 8192, 00:32:38.419 "large_bufsize": 135168 00:32:38.419 } 00:32:38.419 } 00:32:38.419 ] 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "subsystem": "sock", 00:32:38.419 "config": [ 00:32:38.419 { 00:32:38.419 "method": "sock_set_default_impl", 00:32:38.419 "params": { 00:32:38.419 "impl_name": "posix" 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "sock_impl_set_options", 00:32:38.419 "params": { 00:32:38.419 "impl_name": "ssl", 00:32:38.419 "recv_buf_size": 4096, 00:32:38.419 "send_buf_size": 4096, 00:32:38.419 "enable_recv_pipe": true, 00:32:38.419 "enable_quickack": false, 00:32:38.419 "enable_placement_id": 0, 00:32:38.419 "enable_zerocopy_send_server": true, 00:32:38.419 "enable_zerocopy_send_client": false, 00:32:38.419 "zerocopy_threshold": 0, 00:32:38.419 
"tls_version": 0, 00:32:38.419 "enable_ktls": false 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "sock_impl_set_options", 00:32:38.419 "params": { 00:32:38.419 "impl_name": "posix", 00:32:38.419 "recv_buf_size": 2097152, 00:32:38.419 "send_buf_size": 2097152, 00:32:38.419 "enable_recv_pipe": true, 00:32:38.419 "enable_quickack": false, 00:32:38.419 "enable_placement_id": 0, 00:32:38.419 "enable_zerocopy_send_server": true, 00:32:38.419 "enable_zerocopy_send_client": false, 00:32:38.419 "zerocopy_threshold": 0, 00:32:38.419 "tls_version": 0, 00:32:38.419 "enable_ktls": false 00:32:38.419 } 00:32:38.419 } 00:32:38.419 ] 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "subsystem": "vmd", 00:32:38.419 "config": [] 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "subsystem": "accel", 00:32:38.419 "config": [ 00:32:38.419 { 00:32:38.419 "method": "accel_set_options", 00:32:38.419 "params": { 00:32:38.419 "small_cache_size": 128, 00:32:38.419 "large_cache_size": 16, 00:32:38.419 "task_count": 2048, 00:32:38.419 "sequence_count": 2048, 00:32:38.419 "buf_count": 2048 00:32:38.419 } 00:32:38.419 } 00:32:38.419 ] 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "subsystem": "bdev", 00:32:38.419 "config": [ 00:32:38.419 { 00:32:38.419 "method": "bdev_set_options", 00:32:38.419 "params": { 00:32:38.419 "bdev_io_pool_size": 65535, 00:32:38.419 "bdev_io_cache_size": 256, 00:32:38.419 "bdev_auto_examine": true, 00:32:38.419 "iobuf_small_cache_size": 128, 00:32:38.419 "iobuf_large_cache_size": 16 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_raid_set_options", 00:32:38.419 "params": { 00:32:38.419 "process_window_size_kb": 1024 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_iscsi_set_options", 00:32:38.419 "params": { 00:32:38.419 "timeout_sec": 30 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_nvme_set_options", 00:32:38.419 "params": { 00:32:38.419 "action_on_timeout": "none", 00:32:38.419 "timeout_us": 0, 00:32:38.419 "timeout_admin_us": 0, 00:32:38.419 "keep_alive_timeout_ms": 10000, 00:32:38.419 "arbitration_burst": 0, 00:32:38.419 "low_priority_weight": 0, 00:32:38.419 "medium_priority_weight": 0, 00:32:38.419 "high_priority_weight": 0, 00:32:38.419 "nvme_adminq_poll_period_us": 10000, 00:32:38.419 "nvme_ioq_poll_period_us": 0, 00:32:38.419 "io_queue_requests": 512, 00:32:38.419 "delay_cmd_submit": true, 00:32:38.419 "transport_retry_count": 4, 00:32:38.419 "bdev_retry_count": 3, 00:32:38.419 "transport_ack_timeout": 0, 00:32:38.419 "ctrlr_loss_timeout_sec": 0, 00:32:38.419 "reconnect_delay_sec": 0, 00:32:38.419 "fast_io_fail_timeout_sec": 0, 00:32:38.419 "disable_auto_failback": false, 00:32:38.419 "generate_uuids": false, 00:32:38.419 "transport_tos": 0, 00:32:38.419 "nvme_error_stat": false, 00:32:38.419 "rdma_srq_size": 0, 00:32:38.419 "io_path_stat": false, 00:32:38.419 "allow_accel_sequence": false, 00:32:38.419 "rdma_max_cq_size": 0, 00:32:38.419 "rdma_cm_event_timeout_ms": 0, 00:32:38.419 "dhchap_digests": [ 00:32:38.419 "sha256", 00:32:38.419 "sha384", 00:32:38.419 "sha512" 00:32:38.419 ], 00:32:38.419 "dhchap_dhgroups": [ 00:32:38.419 "null", 00:32:38.419 "ffdhe2048", 00:32:38.419 "ffdhe3072", 00:32:38.419 "ffdhe4096", 00:32:38.419 "ffdhe6144", 00:32:38.419 "ffdhe8192" 00:32:38.419 ] 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_nvme_attach_controller", 00:32:38.419 "params": { 00:32:38.419 "name": "nvme0", 00:32:38.419 "trtype": "TCP", 00:32:38.419 "adrfam": "IPv4", 
00:32:38.419 "traddr": "127.0.0.1", 00:32:38.419 "trsvcid": "4420", 00:32:38.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.419 "prchk_reftag": false, 00:32:38.419 "prchk_guard": false, 00:32:38.419 "ctrlr_loss_timeout_sec": 0, 00:32:38.419 "reconnect_delay_sec": 0, 00:32:38.419 "fast_io_fail_timeout_sec": 0, 00:32:38.419 "psk": "key0", 00:32:38.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:38.419 "hdgst": false, 00:32:38.419 "ddgst": false 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_nvme_set_hotplug", 00:32:38.419 "params": { 00:32:38.419 "period_us": 100000, 00:32:38.419 "enable": false 00:32:38.419 } 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "method": "bdev_wait_for_examine" 00:32:38.419 } 00:32:38.419 ] 00:32:38.419 }, 00:32:38.419 { 00:32:38.419 "subsystem": "nbd", 00:32:38.419 "config": [] 00:32:38.419 } 00:32:38.419 ] 00:32:38.419 }' 00:32:38.419 11:59:06 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.419 11:59:06 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.419 11:59:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:38.419 [2024-07-15 11:59:06.351034] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:32:38.419 [2024-07-15 11:59:06.351090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2181688 ] 00:32:38.419 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.419 [2024-07-15 11:59:06.421068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.419 [2024-07-15 11:59:06.484825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:38.678 [2024-07-15 11:59:06.642996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:39.279 11:59:07 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.279 11:59:07 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:32:39.279 11:59:07 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.279 11:59:07 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:39.279 11:59:07 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:39.279 11:59:07 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.279 11:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:39.538 11:59:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:39.538 11:59:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:39.538 11:59:07 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:32:39.538 11:59:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:39.538 11:59:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.538 11:59:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:39.538 11:59:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:39.796 11:59:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.gMsJgHjer4 /tmp/tmp.PSYQVH6BaI 00:32:39.796 11:59:07 keyring_file -- keyring/file.sh@20 -- # killprocess 2181688 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2181688 ']' 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2181688 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2181688 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2181688' 00:32:39.796 killing process with pid 2181688 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@967 -- # kill 2181688 00:32:39.796 Received shutdown signal, test time was about 1.000000 seconds 00:32:39.796 00:32:39.796 Latency(us) 00:32:39.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:39.796 =================================================================================================================== 00:32:39.796 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:39.796 11:59:07 keyring_file -- common/autotest_common.sh@972 -- # wait 2181688 00:32:40.055 11:59:08 keyring_file -- keyring/file.sh@21 -- # killprocess 2179979 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2179979 ']' 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2179979 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2179979 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2179979' 00:32:40.055 killing process with pid 2179979 00:32:40.055 11:59:08 keyring_file -- 
common/autotest_common.sh@967 -- # kill 2179979 00:32:40.055 [2024-07-15 11:59:08.128394] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:40.055 11:59:08 keyring_file -- common/autotest_common.sh@972 -- # wait 2179979 00:32:40.624 00:32:40.624 real 0m11.797s 00:32:40.624 user 0m27.177s 00:32:40.624 sys 0m3.355s 00:32:40.624 11:59:08 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:40.624 11:59:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:40.624 ************************************ 00:32:40.624 END TEST keyring_file 00:32:40.624 ************************************ 00:32:40.624 11:59:08 -- common/autotest_common.sh@1142 -- # return 0 00:32:40.624 11:59:08 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:32:40.624 11:59:08 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:40.624 11:59:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:40.624 11:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.624 11:59:08 -- common/autotest_common.sh@10 -- # set +x 00:32:40.624 ************************************ 00:32:40.624 START TEST keyring_linux 00:32:40.624 ************************************ 00:32:40.624 11:59:08 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:40.624 * Looking for test storage... 00:32:40.624 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:006f0d1b-21c0-e711-906e-00163566263e 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=006f0d1b-21c0-e711-906e-00163566263e 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.624 11:59:08 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.624 11:59:08 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.624 11:59:08 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.624 11:59:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.624 11:59:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.624 11:59:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.624 11:59:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:40.624 11:59:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:40.624 11:59:08 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:40.624 /tmp/:spdk-test:key0 00:32:40.624 11:59:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:40.624 11:59:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:40.624 11:59:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:40.883 11:59:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:40.883 11:59:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:40.883 /tmp/:spdk-test:key1 00:32:40.883 11:59:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2182271 00:32:40.883 11:59:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2182271 00:32:40.883 11:59:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2182271 ']' 00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
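The prep_key trace above turns each raw hex key into the NVMe TLS PSK interchange string that gets written to /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. A minimal sketch of that derivation, assuming (the traced helper shells out to an inline `python -` step, as shown above) that the interchange format is the NVMeTLSkey-1 prefix, the two-digit hmac id, and base64 of the key bytes followed by their little-endian CRC32:

    format_interchange_psk() {
        local key=$1 digest=$2
        # NVMeTLSkey-1:<digest>:<base64(key_bytes || crc32_le(key_bytes))>:
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest"
    }

    format_interchange_psk 00112233445566778899aabbccddeeff 0
    # should print the same string seen in this trace, assuming the CRC
    # construction above matches the real helper in test/nvmf/common.sh:
    # NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: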
00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:40.883 11:59:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:40.883 [2024-07-15 11:59:08.796429] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:32:40.883 [2024-07-15 11:59:08.796488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182271 ] 00:32:40.883 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.883 [2024-07-15 11:59:08.863789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.883 [2024-07-15 11:59:08.937424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:41.819 [2024-07-15 11:59:09.587935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:41.819 null0 00:32:41.819 [2024-07-15 11:59:09.619974] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:41.819 [2024-07-15 11:59:09.620319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:41.819 878418996 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:41.819 187552412 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2182308 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:41.819 11:59:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2182308 /var/tmp/bperf.sock 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2182308 ']' 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:41.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
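The keyctl add calls above load both interchange strings into the kernel session keyring (@s) and print the serial number the kernel assigned to each: 878418996 for :spdk-test:key0 and 187552412 for :spdk-test:key1. The later assertions and the final cleanup work purely in terms of those serials; a condensed sketch of the keyctl round trip the test relies on (key name and payload taken from this log):

    # add a user-type key to the session keyring; keyctl prints its serial
    sn=$(keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s)
    keyctl search @s user :spdk-test:key0   # name -> serial lookup (what get_keysn does)
    keyctl print "$sn"                      # payload readback, compared to the expected string
    keyctl unlink "$sn"                     # cleanup path; prints "1 links removed"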
00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:41.819 11:59:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:41.819 [2024-07-15 11:59:09.690308] Starting SPDK v24.09-pre git sha1 62a72093c / DPDK 24.03.0 initialization... 00:32:41.819 [2024-07-15 11:59:09.690356] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182308 ] 00:32:41.819 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.819 [2024-07-15 11:59:09.760040] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.819 [2024-07-15 11:59:09.834044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.754 11:59:10 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:42.754 11:59:10 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:32:42.754 11:59:10 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:42.754 11:59:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:42.754 11:59:10 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:42.754 11:59:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:43.012 11:59:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:43.012 11:59:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:43.012 [2024-07-15 11:59:11.085994] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:43.271 nvme0n1 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:43.271 11:59:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:43.271 11:59:11 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:43.271 11:59:11 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.271 11:59:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.271 11:59:11 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@25 -- # sn=878418996 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 878418996 == \8\7\8\4\1\8\9\9\6 ]] 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 878418996 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:43.529 11:59:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:32:43.529 Running I/O for 1 seconds...
00:32:44.905
00:32:44.905 Latency(us)
00:32:44.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:44.905 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:44.905 nvme0n1 : 1.01 13108.33 51.20 0.00 0.00 9726.09 4377.80 14784.92
00:32:44.905 ===================================================================================================================
00:32:44.905 Total : 13108.33 51.20 0.00 0.00 9726.09 4377.80 14784.92
00:32:44.905 0
00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:44.905 11:59:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:44.905 11:59:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:44.905 11:59:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.905 11:59:13 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
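The NOT/valid_exec_arg trace that begins here is the suite's expected-failure wrapper: attaching with :spdk-test:key1 has to fail (the target was configured with key0), and NOT inverts the exit status so the test passes only when the command errors out. A minimal sketch of the pattern, assuming only that the real helper in autotest_common.sh also treats death by signal (es > 128) as a hard failure, as the es checks traced below suggest:

    NOT() {
        local es=0
        "$@" || es=$?                    # run the wrapped command, capture its status
        (( es > 128 )) && return "$es"   # killed by a signal: propagate as a real failure
        (( es != 0 ))                    # succeed only if the command failed
    }

    NOT false && echo 'expected failure observed'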
00:32:44.905 11:59:13 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:44.905 11:59:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:32:45.163 [2024-07-15 11:59:13.161730] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:32:45.163 [2024-07-15 11:59:13.162322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b2760 (107): Transport endpoint is not connected
00:32:45.163 [2024-07-15 11:59:13.163317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6b2760 (9): Bad file descriptor
00:32:45.163 [2024-07-15 11:59:13.164318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:32:45.163 [2024-07-15 11:59:13.164338] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:32:45.163 [2024-07-15 11:59:13.164348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:32:45.163 request:
00:32:45.163 {
00:32:45.163 "name": "nvme0",
00:32:45.163 "trtype": "tcp",
00:32:45.163 "traddr": "127.0.0.1",
00:32:45.163 "adrfam": "ipv4",
00:32:45.163 "trsvcid": "4420",
00:32:45.163 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:45.163 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:45.163 "prchk_reftag": false,
00:32:45.163 "prchk_guard": false,
00:32:45.163 "hdgst": false,
00:32:45.163 "ddgst": false,
00:32:45.163 "psk": ":spdk-test:key1",
00:32:45.163 "method": "bdev_nvme_attach_controller",
00:32:45.163 "req_id": 1
00:32:45.163 }
00:32:45.163 Got JSON-RPC error response
00:32:45.163 response:
00:32:45.163 {
00:32:45.163 "code": -5,
00:32:45.163 "message": "Input/output error"
00:32:45.163 }
00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@33 -- # sn=878418996 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 878418996 00:32:45.163 1 links removed 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:45.163
11:59:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@33 -- # sn=187552412 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 187552412 00:32:45.163 1 links removed 00:32:45.163 11:59:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2182308 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2182308 ']' 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2182308 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182308 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182308' killing process with pid 2182308 11:59:13 keyring_linux -- common/autotest_common.sh@967 -- # kill 2182308
Received shutdown signal, test time was about 1.000000 seconds
00:32:45.163
00:32:45.163 Latency(us)
00:32:45.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:45.163 ===================================================================================================================
00:32:45.163 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:32:45.163 11:59:13 keyring_linux -- common/autotest_common.sh@972 -- # wait 2182308 00:32:45.421 11:59:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2182271 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2182271 ']' 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2182271 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2182271 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2182271' killing process with pid 2182271 11:59:13 keyring_linux -- common/autotest_common.sh@967 -- # kill 2182271 00:32:45.421 11:59:13 keyring_linux -- common/autotest_common.sh@972 -- # wait 2182271 00:32:45.987 00:32:45.987 real 0m5.288s 00:32:45.987 user 0m9.104s 00:32:45.987 sys 0m1.665s 00:32:45.987 11:59:13 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:45.987 11:59:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:45.987 ************************************ 00:32:45.987 END TEST keyring_linux 00:32:45.987 ************************************ 00:32:45.987 11:59:13 -- common/autotest_common.sh@1142 -- # return 0 00:32:45.987 11:59:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 --
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:32:45.987 11:59:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:45.988 11:59:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:32:45.988 11:59:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:45.988 11:59:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:45.988 11:59:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:45.988 11:59:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:32:45.988 11:59:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:32:45.988 11:59:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:45.988 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:32:45.988 11:59:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:32:45.988 11:59:13 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:45.988 11:59:13 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:45.988 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:32:52.643 INFO: APP EXITING 00:32:52.643 INFO: killing all VMs 00:32:52.643 INFO: killing vhost app 00:32:52.643 INFO: EXIT DONE 00:32:55.173 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:55.173 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:55.173 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:55.173 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:55.431 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:55.689 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:55.689 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:32:58.215 Cleaning 00:32:58.215 Removing: /var/run/dpdk/spdk0/config 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:58.472 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:58.472 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:58.472 Removing: 
/var/run/dpdk/spdk1/config 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:58.472 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:58.472 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:58.472 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:58.472 Removing: /var/run/dpdk/spdk2/config 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:58.472 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:58.472 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:58.472 Removing: /var/run/dpdk/spdk3/config 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:58.472 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:58.472 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:58.472 Removing: /var/run/dpdk/spdk4/config 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:58.472 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:58.472 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:58.472 Removing: /dev/shm/bdev_svc_trace.1 00:32:58.472 Removing: /dev/shm/nvmf_trace.0 00:32:58.472 Removing: /dev/shm/spdk_tgt_trace.pid1776897 00:32:58.472 Removing: /var/run/dpdk/spdk0 00:32:58.472 Removing: /var/run/dpdk/spdk1 00:32:58.472 Removing: /var/run/dpdk/spdk2 00:32:58.472 Removing: /var/run/dpdk/spdk3 00:32:58.472 Removing: /var/run/dpdk/spdk4 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1774437 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1775693 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1776897 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1777600 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1778457 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1778704 
00:32:58.472 Removing: /var/run/dpdk/spdk_pid1779803 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1779891 00:32:58.472 Removing: /var/run/dpdk/spdk_pid1780188 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1781899 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1783332 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1783641 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1783965 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1784293 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1784624 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1784905 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1785079 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1785337 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1786245 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1789229 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1789525 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1789819 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1790078 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1790697 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1790791 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1791682 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1792007 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1792317 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1792581 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1792717 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1792891 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1793378 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1793543 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1793868 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1794167 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1794308 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1794500 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1794790 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1795067 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1795349 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1795571 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1795785 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1795992 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1796234 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1796511 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1796797 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1797076 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1797355 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1797642 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1797924 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1798203 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1798450 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1798696 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1798935 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1799170 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1799390 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1799662 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1799980 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1800313 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1804186 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1851071 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1855574 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1866003 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1871656 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1875893 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1876452 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1882698 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1889712 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1889773 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1890748 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1891548 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1892537 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1893122 
00:32:58.730 Removing: /var/run/dpdk/spdk_pid1893133 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1893395 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1893414 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1893487 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1894456 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1895252 00:32:58.730 Removing: /var/run/dpdk/spdk_pid1896311 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1896850 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1896921 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1897189 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1898519 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1899634 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1908304 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1908584 00:32:58.731 Removing: /var/run/dpdk/spdk_pid1913094 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1919195 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1921899 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1933251 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1942604 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1944433 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1945481 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1963151 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1967346 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1993162 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1997940 00:32:58.988 Removing: /var/run/dpdk/spdk_pid1999540 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2001500 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2001657 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2001933 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2002206 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2002780 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2004641 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2005748 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2006312 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2008457 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2009236 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2009847 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2014914 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2025358 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2029544 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2035722 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2037150 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2038655 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2043206 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2047693 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2055410 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2055412 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2060197 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2060449 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2060710 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2061032 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2061126 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2066300 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2066942 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2071553 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2074230 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2080019 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2085737 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2094626 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2101846 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2101850 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2121483 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2122083 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2122837 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2123394 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2124387 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2125021 
00:32:58.988 Removing: /var/run/dpdk/spdk_pid2125582 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2126362 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2130761 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2131069 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2137208 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2137487 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2139780 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2148017 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2148022 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2153291 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2155826 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2157947 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2159039 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2161205 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2162378 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2171584 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2172106 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2172635 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2175075 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2175607 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2176135 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2179979 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2180220 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2181688 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2182271 00:32:58.988 Removing: /var/run/dpdk/spdk_pid2182308 00:32:58.988 Clean 00:32:59.245 11:59:27 -- common/autotest_common.sh@1451 -- # return 0 00:32:59.245 11:59:27 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:32:59.245 11:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:59.245 11:59:27 -- common/autotest_common.sh@10 -- # set +x 00:32:59.245 11:59:27 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:32:59.245 11:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:59.245 11:59:27 -- common/autotest_common.sh@10 -- # set +x 00:32:59.245 11:59:27 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:59.245 11:59:27 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:59.245 11:59:27 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:59.245 11:59:27 -- spdk/autotest.sh@391 -- # hash lcov 00:32:59.245 11:59:27 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:59.245 11:59:27 -- spdk/autotest.sh@393 -- # hostname 00:32:59.245 11:59:27 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-22 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:59.501 geninfo: WARNING: invalid characters removed from testname! 
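The lcov steps that follow post-process the two coverage captures made during the run: the baseline (cov_base.info) and the per-host test capture (cov_test.info, written by the geninfo pass above) are merged, then bundled DPDK, system headers, and a few SPDK example/app directories are pruned so the report covers only SPDK sources. The same flow, condensed (file names as in this log; the --rc branch/function flags are omitted here for brevity):

    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info   # union of baseline + test captures
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info        # drop the bundled DPDK sources
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info          # drop system headers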
00:33:21.414 11:59:47 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.349 11:59:50 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:24.251 11:59:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:25.625 11:59:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.530 11:59:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:28.908 11:59:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:30.852 11:59:58 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:30.852 11:59:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:30.852 11:59:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:30.852 11:59:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.852 11:59:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.852 11:59:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.852 11:59:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.852 11:59:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.852 11:59:58 -- paths/export.sh@5 -- $ export PATH 00:33:30.852 11:59:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.852 11:59:58 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:30.852 11:59:58 -- common/autobuild_common.sh@444 -- $ date +%s 00:33:30.852 11:59:58 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721037598.XXXXXX 00:33:30.852 11:59:58 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721037598.URghSK 00:33:30.852 11:59:58 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:33:30.852 11:59:58 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:33:30.852 11:59:58 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:30.852 11:59:58 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:30.852 11:59:58 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:30.852 11:59:58 -- common/autobuild_common.sh@460 -- $ get_config_params 00:33:30.852 11:59:58 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:30.852 11:59:58 -- common/autotest_common.sh@10 -- $ set +x 00:33:30.852 11:59:58 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:30.852 11:59:58 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:33:30.852 11:59:58 -- pm/common@17 -- $ local monitor 00:33:30.852 11:59:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.852 11:59:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.852 11:59:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.852 11:59:58 -- pm/common@21 -- $ date +%s 00:33:30.852 11:59:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:30.852 11:59:58 -- pm/common@21 -- $ date +%s 00:33:30.852 
11:59:58 -- pm/common@25 -- $ sleep 1 00:33:30.852 11:59:58 -- pm/common@21 -- $ date +%s 00:33:30.852 11:59:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037598 00:33:30.852 11:59:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037598 00:33:30.852 11:59:58 -- pm/common@21 -- $ date +%s 00:33:30.852 11:59:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037598 00:33:30.852 11:59:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721037598 00:33:30.852 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037598_collect-vmstat.pm.log 00:33:30.852 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037598_collect-cpu-load.pm.log 00:33:30.852 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037598_collect-cpu-temp.pm.log 00:33:30.852 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721037598_collect-bmc-pm.bmc.pm.log 00:33:31.787 11:59:59 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:33:31.787 11:59:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:33:31.787 11:59:59 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:31.787 11:59:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:31.787 11:59:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:31.787 11:59:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:31.787 11:59:59 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:31.787 11:59:59 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:31.787 11:59:59 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:31.787 11:59:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:31.787 11:59:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:31.787 11:59:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:31.787 11:59:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:31.787 11:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:31.787 11:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:31.787 11:59:59 -- pm/common@44 -- $ pid=2193347 00:33:31.787 11:59:59 -- pm/common@50 -- $ kill -TERM 2193347 00:33:31.787 11:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:31.787 11:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:31.787 11:59:59 -- pm/common@44 -- $ pid=2193348 00:33:31.787 11:59:59 -- pm/common@50 -- $ kill 
-TERM 2193348 00:33:31.787 11:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:31.787 11:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:31.787 11:59:59 -- pm/common@44 -- $ pid=2193350 00:33:31.787 11:59:59 -- pm/common@50 -- $ kill -TERM 2193350 00:33:31.787 11:59:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:31.787 11:59:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:31.788 11:59:59 -- pm/common@44 -- $ pid=2193378 00:33:31.788 11:59:59 -- pm/common@50 -- $ sudo -E kill -TERM 2193378 00:33:31.788 + [[ -n 1664904 ]] 00:33:31.788 + sudo kill 1664904 00:33:31.798 [Pipeline] } 00:33:31.822 [Pipeline] // stage 00:33:31.830 [Pipeline] } 00:33:31.853 [Pipeline] // timeout 00:33:31.859 [Pipeline] } 00:33:31.879 [Pipeline] // catchError 00:33:31.885 [Pipeline] } 00:33:31.906 [Pipeline] // wrap 00:33:31.914 [Pipeline] } 00:33:31.930 [Pipeline] // catchError 00:33:31.939 [Pipeline] stage 00:33:31.940 [Pipeline] { (Epilogue) 00:33:31.953 [Pipeline] catchError 00:33:31.955 [Pipeline] { 00:33:31.971 [Pipeline] echo 00:33:31.973 Cleanup processes 00:33:31.980 [Pipeline] sh 00:33:32.259 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:32.259 2193445 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:32.259 2193811 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:32.272 [Pipeline] sh 00:33:32.548 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:32.548 ++ grep -v 'sudo pgrep' 00:33:32.548 ++ awk '{print $1}' 00:33:32.548 + sudo kill -9 2193445 00:33:32.557 [Pipeline] sh 00:33:32.828 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:32.828 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:33:38.078 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,721 MiB 00:33:42.261 [Pipeline] sh 00:33:42.576 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:42.576 Artifacts sizes are good 00:33:42.589 [Pipeline] archiveArtifacts 00:33:42.596 Archiving artifacts 00:33:42.726 [Pipeline] sh 00:33:43.001 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:43.016 [Pipeline] cleanWs 00:33:43.026 [WS-CLEANUP] Deleting project workspace... 00:33:43.026 [WS-CLEANUP] Deferred wipeout is used... 00:33:43.033 [WS-CLEANUP] done 00:33:43.034 [Pipeline] } 00:33:43.053 [Pipeline] // catchError 00:33:43.063 [Pipeline] sh 00:33:43.334 + logger -p user.info -t JENKINS-CI 00:33:43.340 [Pipeline] } 00:33:43.351 [Pipeline] // stage 00:33:43.355 [Pipeline] } 00:33:43.365 [Pipeline] // node 00:33:43.370 [Pipeline] End of Pipeline 00:33:43.400 Finished: SUCCESS
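One pattern worth noting from the teardown above: both the pm monitor shutdown and the workspace sweep are pid-driven. Each resource collector writes a pid file under the output power/ directory and is stopped with kill -TERM (sudo for the BMC collector), while stragglers still holding the workspace are found and reaped with the pgrep/grep/awk pipeline traced in the epilogue. A condensed sketch of that reaping idiom, with the workspace path from this log:

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # list anything still referencing the workspace, drop the pgrep itself, kill the rest
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [[ -n $pids ]] && sudo kill -9 $pids || true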